Thought Leadership:

UAT for a CTRM:

Getting It Right Before Go-Live

User Acceptance Testing (UAT) is one of the most critical, and often the most misunderstood, phases of any CTRM implementation.

Over the course of more than 25 years implementing CTRM systems across crude oil, refined products, LNG, LPG, coal, ore, metals, concentrates, and biofuels, one phase has consistently determined whether an implementation succeeds or fails: UAT.

Done well, UAT gives the client confidence, surfaces issues before they become operational problems, and produces a set of documented use cases that become invaluable reference guides post-go-live. Done poorly, it gives a false sense of readiness, and the real testing happens on day one of live trading, which is not somewhere you want to be discovering gaps.

In this article, we are going to walk through what good UAT looks like for a CTRM implementation, what the common mistakes are, and how to avoid them. This is not a theoretical exercise. These are the lessons we have learned from implementations that went smoothly and from the ones that did not.

What is UAT?

Let us start with a misconception. UAT is not a final check by the IT department that the software works. The software has already been tested. That is the vendor’s job during configuration and QA. UAT is about the business users confirming that the configured system meets their specific operational needs.

The difference matters enormously. IT testing checks whether a button does what a button should do. UAT checks whether a trader can capture a physical deal end-to-end, with the generation of documents, hedging the pricing/FX exposure, capturing all logistical activities, verifying P&L, and managing payments.

UAT should be led by the client, with the vendor’s business analysts in support. If the vendor is leading UAT on behalf of the client, something has gone wrong. The client must own this phase, because their teams, the people who will use the CTRM every day, are the ones who need to validate the system.

UAT is the final hurdle before go-live, when the business truly confirms that the system delivers real-world value. It is where spreadsheets collide with structured workflows, where traders test formulas they have lived with for years, and where middle office and risk teams confirm the CTRM outputs. And so often it is here that hidden risks and overlooked processes are uncovered.

UAT can be thought of as a dress rehearsal for live operations. The client needs to simulate real working days before the curtain goes up at go-live.

Building Your UAT Scenarios:

The Test Pack

The quality of UAT is only as good as the scenarios that are tested. This is where clients most often run into trouble: by not producing a scenario-based test pack.

During the implementation phase, the vendor’s business analysts will ask the client to document test cases for the UAT phase. These are real-life business use cases that the system needs to manage.

The most common mistake we see at Amphora is when clients submit generic or simplified scenarios. A trader puts forward a fixed-price trade. An operator assesses a straightforward scheduling entry. A finance user evaluates a basic payment.

These scenarios pass, the team ticks the box, and everyone moves on. Then shortly post-go-live, someone needs to handle a deal with a floating price formula using multiple quotes, currencies, units of measure, and specifications, and nobody has tested how the system manages it.

The scenarios you test in UAT need to reflect the actual complexity of your business.

That means including:

  • Capturing your most common deal types, the transactions that happen every week
  • Your most complex operations, the edge cases that only happen occasionally but cannot go wrong when they do
  • The trade variable pricing formulas that currently cause you the most pain
  • Month-end processes, which can surface issues that day-to-day testing misses
  • Exception and error handling, what happens when a price needs to be corrected or a shipment is cancelled
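
For teams that prefer to manage this checklist in a structured form, the test pack can be captured as simple machine-readable records and checked for coverage before UAT begins. The sketch below is purely illustrative: the category names mirror the list above, and none of the field names represent an Amphora or CTRM-specific template.

```python
from dataclasses import dataclass, field

# Hypothetical category labels, one per scenario type in the checklist above.
CATEGORIES = {
    "common_deal", "complex_operation", "pricing_formula",
    "month_end", "exception_handling",
}

@dataclass
class Scenario:
    scenario_id: str
    category: str               # one of CATEGORIES
    description: str
    steps: list = field(default_factory=list)
    expected_result: str = ""

def coverage_gaps(pack):
    """Return the checklist categories that the test pack does not yet cover."""
    covered = {s.category for s in pack}
    return CATEGORIES - covered

pack = [
    Scenario("S-001", "common_deal", "Fixed-price cargo, captured end-to-end"),
    Scenario("S-002", "pricing_formula",
             "Floating price: average of two quotes, FX conversion, UoM change"),
]
print(sorted(coverage_gaps(pack)))
# → ['complex_operation', 'exception_handling', 'month_end']
```

Running a coverage check like this before testing starts makes it immediately visible when a pack is made up only of the simple, everyday cases.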

Part of UAT is also ensuring that the workflow, the flow of work through the business, is properly supported by the system. If users get confused or run into issues about where to go next to manage a transaction, this can point to a deficiency in the system, its documentation, or user training.

Users who have been actively involved in defining the scope and requirements during the implementation will naturally produce better scenarios, because they have a clearer understanding of how the system has been configured. Those who have been less involved often find themselves testing scenarios that do not reflect their real day-to-day needs. This is one of the strongest arguments for involving all key users from the very beginning of the project, not just at the UAT stage.

The documentation produced during UAT, the scenarios, the detailed test steps, the screenshots, the issue log, should be archived and made accessible to the team post-go-live. These materials have considerable value. New joiners can use them to understand how the system works and current users can reference them when facing unfamiliar scenarios. They form the foundation of your internal CTRM knowledge base.

A concise summary of each scenario should be created, with direct links to the corresponding test steps and P&L calculations to ensure full transparency and traceability.

Document each scenario with screenshots as you test. These naturally evolve into a how-to user guide post-go-live for existing and new team members.

Who Should Be in the Room

UAT is not a one-department activity. A CTRM touches nearly every part of a trading business: front office, middle office, operations, finance, and risk. Each of these functions needs to validate the workflows that are relevant to them.

In practice, what we often see is that UAT is treated as a front office exercise. The traders test deal capture. The system is declared ready. Then on day one, the operations team discovers that the scheduling workflow does not match how they manage deliveries, or the finance team realises that the invoice format does not meet their requirements for the accounting system. These are not minor issues to resolve when the business is live.

The right approach is to run UAT streams in parallel by function:

  • Front office: deal capture, pricing, recap generation, trade confirmation
  • Middle office: position reporting, exposure monitoring, hedge matching, mark-to-market, daily P&L attribution
  • Operations/logistics: scheduling, nominations, adjustments, rolling inventory
  • Finance: invoice generation, cost capture, payment processing, cash forecasting, credit exposures, VAT
  • Risk: VaR reporting, risk dashboards

In a previous article, we highlighted the importance of a client having Super-Users: the people who have been comprehensively trained across all relevant areas of the system. The Super-Users for each function should drive UAT for their area, because they are best placed to design the test scenarios and to spot when something is not behaving as expected.

Senior management also have a role here. Not to run the tests, but to make clear to the teams that UAT is a priority. We see this regularly: UAT is scheduled, but the traders and operations staff are pulled back into their day jobs because a deal needs to get done. The result is a rushed UAT phase with gaps that show up post-go-live. Management must buy into the UAT priority and protect the time assigned.

Assign a UAT lead on the client side whose job during this phase is to coordinate across functions, track issue resolution, and make sure nothing falls through the cracks. This person does not need to be senior, but they need authority to chase progress.

Managing Issues Found During UAT

UAT will surface issues. That is its purpose. The question is how those issues are categorised, tracked, and resolved, with the process being agreed before implementation begins.

Not all issues are equal, which has been covered in a previous article. A well-run UAT process uses a clear severity framework.

Every issue raised during UAT should be logged centrally using a shared tracker, such as a JIRA board or a simple spreadsheet. The key thing is that nothing is tracked in someone’s email inbox or communicated verbally and forgotten. Each issue needs an owner, a severity, a target resolution date, and a record of when it was resolved and retested.

The retesting step is one that gets skipped more than it should. An issue is raised, the vendor fixes it, the fix is deployed, and the team moves on to the next scenario without verifying that the fix works. Retesting every resolved issue is not optional and must be completed before the ticket is closed.
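
The retest-before-close discipline can even be enforced mechanically in whatever tracker you use. A minimal sketch, assuming illustrative field names rather than any particular JIRA schema:

```python
from dataclasses import dataclass

@dataclass
class Issue:
    issue_id: str
    severity: str            # e.g. "critical", "high", "medium", "low"
    owner: str
    fixed: bool = False      # vendor has deployed a fix
    retested: bool = False   # client has verified the fix in UAT
    status: str = "open"

def close_issue(issue: Issue) -> Issue:
    """Refuse to close an issue until its fix has been verified by retesting."""
    if not (issue.fixed and issue.retested):
        raise ValueError(f"{issue.issue_id}: cannot close before the fix is retested")
    issue.status = "closed"
    return issue

bug = Issue("UAT-042", "high", "middle office")
bug.fixed = True             # fix deployed...
try:
    close_issue(bug)         # ...but nobody has retested it yet
except ValueError as err:
    print(err)
bug.retested = True
close_issue(bug)
print(bug.status)            # → closed
```

A guard like this turns "retesting is not optional" from a policy statement into something the tracker simply will not let the team skip.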

One point worth making clearly: do not allow a long tail of “minor” issues to accumulate without resolution. We have seen situations where 10 or 20 low-to-medium severity issues are deemed acceptable for go-live, and the combined operational burden of working around all of them makes the early weeks of live use extremely difficult. Judgement is required. A few low-severity items are fine. Many of them together can be debilitating.

The acceptable number of open issues at go-live, broken down by agreed severity levels, should be defined and approved jointly before UAT begins.

UAT progress can then be monitored through a regular summary of testing results and issue status shared transparently with the client.

Set clear go/no-go criteria before UAT begins. Define in advance how many open issues of each severity level would cause you to delay go-live. That conversation is much easier to have before UAT than during it.
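
As an illustration, the agreed criteria can be expressed as a simple table of open-issue limits per severity and checked mechanically at the go/no-go meeting. The threshold numbers below are hypothetical, not a recommendation: every client should agree their own.

```python
# Hypothetical go/no-go thresholds agreed before UAT begins:
# the maximum number of open issues tolerated at go-live, per severity.
THRESHOLDS = {"critical": 0, "high": 0, "medium": 3, "low": 10}

def go_no_go(open_issues):
    """open_issues: mapping of severity -> count of unresolved issues.
    Returns the decision and the list of breached severity levels."""
    breaches = [sev for sev, limit in THRESHOLDS.items()
                if open_issues.get(sev, 0) > limit]
    return ("GO" if not breaches else "NO-GO", breaches)

# One open high-severity issue is enough to block go-live under these limits.
decision, breaches = go_no_go({"critical": 0, "high": 1, "medium": 2, "low": 7})
print(decision, breaches)    # → NO-GO ['high']
```

The value is not in the code itself but in forcing the thresholds to be written down and agreed before anyone is under pressure to hit a go-live date.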

The UAT Environment

A point that seems obvious but is often overlooked: UAT should be conducted in a dedicated UAT environment, not in production. The UAT environment should be configured identically to production and loaded with realistic data, ideally a direct copy of the production database. If users test with non-production data, they could miss issues or falsely report an issue that is solely down to data inconsistencies.

The client should have access to this UAT environment as a permanent fixture, not just during implementation, but ongoing, with the ability to refresh with the production database when requested. This environment is where new releases should be tested before they are promoted to production, and where users can be trained and safely experiment with new configurations or scenarios without any risk to live data.

While a dedicated UAT environment provides the highest level of isolation for testing, for some smaller clients the additional cost may not always be justified.

As a flexible alternative, UAT activities can be conducted within the production environment using a fully segregated ‘UAT portfolio’ approach. This method relies on strict data separation, user access controls, and clearly defined governance to ensure that test activity does not impact live trading or reporting.

This approach can offer a cost-effective option for lower-risk testing scenarios, while more complex or business-critical changes may still warrant a dedicated UAT environment.

The Striving for Perfection Problem

There is a balance to be struck during UAT that is genuinely difficult to get right. On one hand, the system needs to be fit for purpose before go-live. On the other, no system is ever perfect, and waiting for perfection is a reliable way to never go-live.

We see this most often when clients want the CTRM to replicate exactly what they did in their previous system or in their Excel-based process. This is understandable as people are familiar with what they have, and change is uncomfortable. But it is important to recognise that implementing a CTRM means some changes on the client side are also required. Some processes will be different, some reports will look different, and that is not necessarily a failure of the system.

The question to ask for each outstanding issue is not “is this the same as what we had before?” but “can we operate the business effectively with this system as it stands?” If the answer is yes, go live. The refinements can follow in the first release cycle.

Striving for the perfect solution introduces delays, potentially additional costs, and project risk. The vendor’s team has finite capacity and keeping them engaged past their planned project end date to resolve marginal issues has knock-on effects. Accept short-term workarounds for genuinely minor issues, document them clearly, and get them on the vendor’s roadmap, with a target resolution date. Then go live.

The real test of a CTRM is not whether it is perfect at go-live. It is whether the vendor is responsive to issues, releases improvements regularly, and communicates openly. Evaluate the partnership, not just the product.

UAT Sign-Off:

Getting It Done Properly

UAT concludes with formal sign-off. This is not a formality; it is an important milestone that should be treated as such. Sign-off means the business has confirmed that the system, as configured, is fit for purpose and the organisation is ready to go live.

Sign-off should not be an informal verbal agreement; it should be obtained in writing from the appropriate representatives of each functional area that participated in UAT. It should reference the scenarios that were tested, the issues that were raised, the ones that were resolved, and the ones that are being carried forward with an agreed resolution plan. The vendor should guide you through this pre-templated process.

If a functional area declines to sign off, that needs to be taken seriously. Either the issues they have raised need to be resolved, or a clear decision needs to be made, at the appropriate level of seniority, such as the Project Steering Committee, that the outstanding items are acceptable for go-live.

Final Thoughts

UAT is not a checkbox at the end of an implementation with green lights and a signature. It is about ensuring users feel at home in the new system, so they can take ownership of the CTRM they are about to depend on. Done well, it builds confidence, surfaces genuine issues at the right time, and produces documentation that supports the organisation for years. Done poorly, it creates a false sense of readiness that unravels quickly post-go-live.

The investment required to run UAT properly, including good scenarios, the right people, dedicated time, clear issue management and realistic test data, is modest compared to the cost of resolving issues post-go-live, when trades are live and operations cannot stop.

If you are preparing for a CTRM implementation and want to discuss how to structure your UAT phase, or if you are mid-implementation and want a sense check on your current approach, feel free to get in touch. We have been through this process more times than we can count, and we are always glad to share what we have learned.
