People moves: new global co-heads at Citi, Tran takes Asia-Pacific role at Crédit Agricole, RBC Capital Markets hires in sales, and more

Citi has appointed Douglas Adams and James Fleming as global co-heads of equity capital markets (ECM). Adams has been promoted from co-head of North America ECM. He joined Smith Barney’s financial sponsors group in 1993, and has held various equity roles over the past 26 years. Fleming joined Citi last year as co-head of ECM for Europe, the Middle East and Africa (Emea), and has previously worked at Bank of America Merrill Lynch and UBS. In their new roles, Adams and Fleming report to Tyler Dickson and Manolo Falco, co-heads of global markets.

Meanwhile, Citi’s head of Emea capital markets execution, Roger Barb, has left the bank after 19 years. He joins BNP Paribas as head of ECM execution, reporting to Andreas Bernstorff, head of ECM for Emea.

Citi has also named Elree Winnett Seelig as its first head of environmental, social and corporate governance within its markets and securities services unit. Based in London, Seelig will report to both Jim O’Donnell, global head of investor sales and relationship management, and Leo Arduini, regional head of Emea markets and securities services. Prior to joining Citi, Seelig was head of energy client management for commodities at BNP Paribas. Earlier, she worked in natural resources investment banking at Lehman Brothers and in project finance at Bank of America.

Seelig will work alongside Olga Sviatochevski, who will be joining the team from Citi’s Emea corporate strategy department. Prior to joining the bank in 2014, Sviatochevski was a strategy consultant at Booz & Company, with experience across the oil and gas, and financial services sectors.

Andy Tran, head of credit markets and XVA risk management at Crédit Agricole, has been appointed head of market risk for Asia-Pacific. Tran joined the French bank in 2007, from Barclays, where he served as associate director of market risk engineering for two years. Before that, he spent four years as a senior associate within TD Securities’ market risk team, according to his LinkedIn profile. Based in Hong Kong, Tran reports to Olivier Harou, chief risk officer for Asia-Pacific.

RBC Capital Markets has hired Richard Boardman as head of financial institution solutions. Boardman joins from Nomura, where he spent seven years as head of solutions sales for Emea. Previously, he spent four years at RBS as head of UK and Nordic insurance sales solutions. Based in London, Boardman reports to Jason Goss, co-head of European fixed income, currencies and commodities (Ficc) sales.

Meanwhile, RBC has hired Charles Atkinson as head of institutional client management for Emea. Atkinson also joins from Nomura, where he spent six years as head of senior relationship management for Emea. Before that, he held roles in relationship management at JP Morgan and Bear Stearns. In his new role, Atkinson will be based in London, reporting to both Sian Hurrell, global head of foreign exchange and head of Ficc Europe, and Michelle Neal, head of US Ficc and head of institutional client management.

Charlie Scharf has been appointed chief executive officer of Wells Fargo, joining from BNY Mellon, where he spent the last two years as CEO and chairman. Before that, Scharf was chief executive of Visa for four years. He also previously served as CEO of JP Morgan’s financial services division and chief financial officer of Citigroup’s investment banking business. In his new role at Wells Fargo, Scharf succeeds Tim Sloan, who resigned in March.

Meanwhile, Richard Payne Jr has been elected to Wells Fargo’s board of directors and will serve on the board’s credit committee. Payne retired as vice-chairman of wholesale banking at US Bancorp in 2016, after four years in the role. Prior to joining US Bancorp in 2010, he was head of capital markets at National City Corporation, and served in various corporate banking and leadership roles for predecessor companies of Wells Fargo, Bank of America, JP Morgan and Morgan Stanley.

Forex options e-trading platform Digital Vega has appointed Simon Nursey as head of Asia-Pacific. Nursey joins from Standard Chartered, where he was head of foreign exchange options trading. Previously, he spent 17 years at BNP Paribas, latterly as global head of currency options trading.

Asa Attwell joins Digital Vega as head of product development. Attwell spent two years as head of Emea foreign exchange and emerging markets at Nomura. Before that, he held a variety of forex roles at BNP Paribas, most recently as head of G10 foreign exchange trading.

Meanwhile, Laura Winkler joins as Emea relationship manager. Winkler leaves tech firm Luxoft Financial Services, where she was a relationship manager. Prior to that she held a similar role at trading platform Currenex.

In their new roles, Attwell and Winkler report to Mark Suter, executive chairman and co-founder of Digital Vega.

UBS Asset Management has appointed Barry Gill as head of investments. Gill will oversee $710 billion in assets under management across both traditional and alternative asset classes. Gill has spent 25 years at the firm, most recently as head of active equities. In his new role, he succeeds Suni Harford, following her recent appointment as president of UBS AM and a member of the group executive board of UBS. Gill will become a member of the UBS AM executive committee and report to Harford.

Gill is succeeded as head of active equities by Ian McIntosh, who has been deputy head since 2016. In his new role, he reports to Gill.

Themos Fiotakis, head of fundamental strategy across assets, has left UBS after four months in the role. Fiotakis joined the Swiss bank in May 2015, as co-head of foreign exchange and fixed income research and strategy. Before that, he spent 11 years at Goldman Sachs, most recently as head of emerging markets foreign exchange strategy. According to his LinkedIn profile, Fiotakis is currently on gardening leave and plans to move to the buy side.

Marcel Naas and Marcus Addison will become managing directors at Eurex Securities Transactions Services, a buy-in agent business due to launch next year. Naas will hand over his role as head of Eurex Repo to Matthias Graulich. Graulich will still serve as executive board member of Eurex Clearing, a position he has held since 2014, as well as continuing to be responsible for the fixed income, funding and financing strategy, and product development area at Deutsche Börse Group.

Naas will continue as a member of the executive board of Eurex Global Derivatives and Addison retains his role in the risk area of Eurex Clearing.

TP Icap has appointed Joanna Nader as group head of strategy. Nader joins from RBC Capital Markets, where she spent the last year as head of diversified/speciality financials research. Before that, she spent almost 10 years as chief investment officer of private equity firm JRJ Group and eight years at Lehman Brothers. In her new role, Nader reports to group CEO Nicolas Breteau.

Credit Suisse has hired Alex Diaconescu as a credit trader, trading credit default swap indexes. Diaconescu joins from Nomura, where he was a credit trader for two years. Previously, he spent five years as an associate credit index trader at Citi, his LinkedIn profile states. In his new role at the Swiss bank, he reports to Chris Orr, head of sales, trading and sector strategy for the investment grade business in Emea.

Seema Hingorani has been appointed as a managing director at Morgan Stanley Investment Management. Hingorani is the founder of Girls Who Invest, a non-profit organisation that aims to bring more women into investment and portfolio management roles. Prior to this, she was chief investment officer for the New York City Retirement System. Hingorani has also served as global head of research at Pyramis Global Advisors and Fidelity Institutional Asset Management, a partner and portfolio manager at Andor Capital, and an equity analyst at T. Rowe Price.

Zak de Mariveles has joined Hilbert Investment Solutions as a non-executive director. De Mariveles was previously a managing director at Societe Generale for six years. Earlier, he spent six years as a managing director at RBS and 13 years as a director at Barclays Capital. De Mariveles is also founder and chairman of the UK Structured Products Association.

HSBC Global Asset Management has appointed John Ware as senior product specialist for hedge funds in its alternatives team. Based in London, Ware will report to Steven Ward, global head of alternative products. He joins from BlackRock, where he was a senior product specialist for BlackRock Alternative Advisors, the firm’s hedge fund solutions platform. Prior to joining the firm in 2010, Ware worked in the ultra-high-net-worth client team at Barclays Wealth as a specialist, delivering cross-platform solutions, including alternative investment opportunities.

David Warren is to step down as group chief financial officer and member of the board at the London Stock Exchange Group by the end of 2020, continuing until the close of the group’s acquisition of Refinitiv. Warren joined LSEG in 2012, after a stint at Nasdaq where he was chief financial officer for nine years.

Elizabeth Geoghegan has been hired by Irish fund manager Mediolanum International Funds as a fixed income portfolio manager. Geoghegan joins from Goodbody Asset Management, where she spent four years as a fixed income manager. In her new role, Geoghegan reports to Charles Diebel, head of fixed income.

Industry association Invest Europe has appointed Eric de Montgolfier as CEO, effective from the end of December. De Montgolfier joins from Brussels-listed investment manager Gimv, where he was partner and head of Gimv France, responsible for overseeing investment and group strategy. Previously, he spent 11 years as co-founding managing partner and chief operating officer at Edmond de Rothschild Capital Partners. At Invest Europe, de Montgolfier succeeds Michael Collins, who stepped down as CEO in August after six years with the organisation.

The Autorité des Marchés Financiers has appointed Didier Deleage as deputy head of the asset management directorate. Deleage joins from Edmond de Rothschild Asset Management, where he worked for five years, most recently as CEO. Before that, he spent eight years as a member of the board of directors at the French Asset Management Association. At the AMF, Deleage reports to managing director Philippe Sourlas.


Swaps data: CCPs – a systemically important market infrastructure

Disclosures show heavy concentration of initial margin in top three clearing services, writes Amir Khwaja

With more than $1 trillion of financial resources backing cleared trades, and billions of dollars of cash flowing daily between members and clients, clearing houses today are systemically important market infrastructures.

The latest set of quantitative disclosures show $740 billion held as initial margin (IM) – the largest weapon used by central counterparties to protect against risk. Almost half of this is held as cash. Default resources account for another $210 billion.

We aggregate data for more than 50 clearing houses or services, ranging from global firms such as CME, Eurex, Ice and LCH to small regional firms such as AthexClear, KDPW and Keler. A clearing house can operate one or more clearing services and, while the distinction is clear (pun intended), some disclosures are provided for a clearing house, while others cover a single clearing service. So the figure of 50 is an understatement if we are counting clearing services and an overstatement if we are counting clearing houses.

Futures and options together account for the largest IM amount by product type, but LCH SwapClear emerges as the largest single clearing service with $159 billion of IM. Clearing can be a lumpy business – the top three services together account for 43% of IM.

Crucially, estimated stress losses as a peak day amount for the default of two participants stood at $56 billion at the mid-point of the year – significantly lower than the $98 billion of prefunded default resources. Capital provided by clearing house owners, so-called ‘skin in the game’, totalled $16 billion – 7.6% of all default resources.

Initial margin

Let’s start with the largest financial resource, initial margin. We aggregate it for all the clearing services we have and separate it into house margin, posted by member firms, and client margin, posted on behalf of firms that do not mutualise losses by contributing to the default fund (DF).

[Figure 1: aggregate initial margin by clearing service, split between house and client]

Next, let’s try to segment this IM by product type. We use product type in a loose sense, so we assign each clearing service to one product type. Often that is obvious from the name of the clearing service, but at other times we have to plump for one as there is no split available.

[Figure 2: initial margin by product type]

The three largest clearing services by initial margin required as of June 28, 2019, were LCH SwapClear with $159 billion, CME Base with $97 billion and B3 (Bovespa) with $59 billion. These three represent 43% of the overall initial margin, a chunky share indeed.
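The concentration figure is easy to verify from the numbers quoted. A quick sketch (the choice of Python is mine; the dollar amounts are those stated above):

```python
# Initial margin (IM) required as of June 28, 2019, in $bn, per the disclosures
total_im = 740.0
top_three = {"LCH SwapClear": 159.0, "CME Base": 97.0, "B3 (Bovespa)": 59.0}

top_three_im = sum(top_three.values())
share = top_three_im / total_im
print(f"Top three: ${top_three_im:.0f}bn of IM, {share:.0%} of the total")
# → Top three: $315bn of IM, 43% of the total
```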

Default resources

Next, let’s look at the second-largest financial resource: the default resources that clearing houses require from members and their owners to mutualise losses. We aggregate across all 50 services and break the total down by type of default resource.

[Figure 3: default resources by type]

It would be easy to calculate ratios and argue that own capital – the so-called ‘skin in the game’ – is a low percentage of aggregate resources and should be much higher, perhaps even a fixed percentage, as a group of financial firms recently proposed. However, I think that is a massive over-simplification of a complex topic, best left to an article where time and space permit a proper consideration of the arguments. One point to highlight is that prefunded resources total $99 billion, or 47% of total resources – a healthy percentage that highlights the funds readily available to address defaults.
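As a sanity check on those ratios, here is a minimal sketch using the dollar figures quoted in this article (Python used purely for illustration):

```python
# Default resources in $bn, as quoted above
total_resources = 210.0   # total default resources
prefunded = 99.0          # prefunded portion
own_capital = 16.0        # clearing house owners' capital, 'skin in the game'

print(f"Prefunded share: {prefunded / total_resources:.0%}")      # → 47%
print(f"Own-capital share: {own_capital / total_resources:.1%}")  # → 7.6%
```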

Cash for initial margin and default funds

In fact if we look at the disclosures of how much cash (not securities) is held by clearing houses for initial margin and default resources, we see the following:

[Figure 4: cash held for initial margin and default fund contributions]

That is a lot of liquid cash to draw on in the event of market stress and defaults. A quick look at the largest holders shows that LCH SwapClear held $68 billion of cash against required IM, while CC&G held $10 billion of cash for DF contributions.

Stress loss

Given that we have highlighted the size of financial resources held by clearing houses, it is appropriate next to consider disclosures on credit risk and stress losses in extreme but plausible market conditions.

[Table A: estimated peak-day stress losses for the default of two participants]

Importantly, the $56 billion figure is much lower than the prefunded default resources of $98 billion, let alone the total default resources of $210 billion.

Margin calls

And last but not least, let’s take a peek at the aggregate size of margin calls.

[Table B: aggregate size of margin calls]

That’s it for now: $740 billion of initial margin, $210 billion of default resources, average daily variation margin calls of $20 billion and oodles of cash. Clearing houses really are a $1 trillion systemically important market infrastructure.

Amir Khwaja is chief executive of Clarus Financial Technology.

In-depth intro

Climate change spells death of certainty

Global warming threatens to upend everything risk models take for granted

After years of complacency, financial firms are finally getting serious about measuring their exposure to climate change, and taking action to mitigate its effects.

This is no easy task.

Beyond the usual intricacies and pitfalls of modelling, climate change brings with it the erosion of long-held certainties: predictable weather patterns in developed markets; steady sea levels in heavily insured jurisdictions; stable governments capable of maintaining fiscal discipline while spending trillions on climate defences.

All of which makes the job of risk managers – those tasked with measuring, modelling and putting a dollar value on climate risks, and enacting a plan to mitigate the impact – rather difficult.

Insurers and re-insurers know this first hand. The industry just witnessed its costliest back-to-back years, with losses totalling $227 billion in 2017 and 2018, according to data compiled by Aon. Last year, insured catastrophe losses hit $90 billion, the fourth-highest on record. Weather disasters, such as hurricanes Michael and Florence and Typhoon Jebi, accounted for nearly $89 billion of the total.

Usually, when underwriting a risk of this type, an insurer would rely on a catastrophe or cat model to estimate the frequency, intensity and possible damage footprint of a particular hazard. A nat cat model for hurricanes, for instance, looks at recorded instances of historical hurricanes in a particular geographic area and generates a range of estimates as to the extent of the damage a hypothetical future hurricane could inflict.

But cat models struggle to offer any kind of accurate gauge for events that are far more extreme than those witnessed before, or in a location they were never expected to occur – wildfires or typhoons that may owe their increase in frequency and severity to changing weather patterns being a prime example – because, at the outer limits of the tail, there’s no historical data to feed the model.

For the five largest cat events of 2018, the average loss estimates of the two main modelling firms – AIR Worldwide and RMS – came in at $14.25 billion, roughly 65% below the true loss figure of $40.3 billion.

The industry is now searching for more consistent and accurate ways to model losses arising from weather-related disasters. Several lines of inquiry are converging on the idea of combining decadal forecasting techniques – which compute climate fluctuations over multi-year periods – with orthodox stochastic models.

“The firm that merges decadal climate models into traditional stochastic natural catastrophe models the most quickly and credibly will be the winner,” says Alison Martin, chief risk officer at Zurich. “They will be able to say: ‘We can attribute X storm, X flood, X wind event to climate change’ – and the modelling would support it: ‘Here is the economic cost of climate change.’ No-one has done that yet, successfully. It’s a trillion-dollar question.”

Financial firms are also applying so-called ensemble techniques – an umbrella term for quantification methods that employ multiple models at once – to quantify climate exposures. The most common approach is to run cat models alongside general circulation models, or GCMs, which can be used to simulate various climate scenarios.

At the extreme, of course, insurers can stop underwriting risks they cannot accurately gauge: Argo, a large California-based reinsurer, decided it did not want to write casualty business for utilities in the state – just before 2018’s deadly wildfires struck. And banks could cease financing such risks.

But simply unbanking whole sectors and uninsuring whole jurisdictions is not what agents of risk transfer are supposed to do; if someone is willing to pay, mitigation should have a price. New modelling techniques could help financial firms to more accurately set that price.


HKEX outage zapped key hedge; now banks push for rule change

Dealers seek shutdown of CBBC market if futures go dark

It took markets by complete surprise. Not long into the trading day on September 5, what appeared to be a small glitch in the futures trading platform of the Hong Kong Exchange turned out to be a lot more serious, making it tough for issuers to price listed products fairly.

Unable to quickly fix a software bug that emerged during the morning of that Thursday, HKEX had no option but to suspend futures trading at 2pm. Trading did not resume until the following morning.

Futures are the main instrument that issuers use to price exotic barrier option products known as callable bull and bear contracts (CBBCs) – leveraged products used by retail investors to speculate on an index, such as the Hang Seng, or an individual stock. Without access to Hong Kong’s futures markets, issuers of these products struggled to offer quotes to investors and hedge risk.

“Many investors were confused and some reached out to the exchange to raise their concerns. But they understood when we explained that if there is no futures market, we have no pricing source for HSI products,” says Dick Chau, director for equity derivatives sales at UBS in Hong Kong, one of the banks that was forced to stop quoting.

Not all issuers pulled back from the market, though. A few pre-empted the shutdown and were able to continue quoting. This disparity reportedly left some investors unhappy because they were unable to transact certain contracts, and, therefore, felt at a competitive disadvantage. The episode has even led to calls for the exchange to introduce a blanket suspension of CBBC activity in the event of a similar market outage.

HKEX has not indicated any immediate plans to change its rules in light of the incident, and has reiterated the rationale of its current policy, under which products are only suspended if there is disruption to trading of the underlying index or stock.

“If the underlying is still live, products should be allowed to continue trading on the listed platform,” says a spokesperson at the firm. “The product issuers have the authority to decide if they would like to continue offering their product to the market.”

Hong Kong’s CBBC market is sizeable, with daily trading volumes in 2018 averaging around HK$7.5 billion ($1 billion), according to data from the Securities and Futures Commission and HKEX. The product represents about 10% of total trading activity in Hong Kong equity markets (see figure 1).

Each CBBC has a specified call price – an American knockout barrier – that, when triggered, leads to the automatic recall of the product, in a similar way to a stop-loss. The call price kicks in on a bull contract if the price of the underlying falls below a certain threshold, and vice versa for a bear contract.

Banks each issue their own CBBCs on indexes and stocks at different call prices and expiries. If an investor wants to trade out of a position before expiry, it can sell it back to the issuer at a price based largely on the so-called intrinsic value, which is the difference between the spot price of the underlying and the call price.
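The intrinsic-value mechanics described above can be sketched in a few lines of Python (an illustration only: real CBBCs are also scaled by an entitlement ratio and carry funding costs, which this ignores, and the numbers are hypothetical):

```python
def cbbc_intrinsic_value(spot: float, call_price: float, bull: bool) -> float:
    """Intrinsic value of a CBBC: the gap between spot and the call price.

    A bull contract has value while spot is above its call price; a bear
    contract has value while spot is below it. Once spot crosses the call
    price, the product knocks out and is recalled.
    """
    if bull:
        return max(spot - call_price, 0.0)
    return max(call_price - spot, 0.0)

# Hypothetical index at 26,200: a bull CBBC with a call price of 25,000...
print(cbbc_intrinsic_value(26_200, 25_000, bull=True))   # → 1200.0
# ...and a bear CBBC with a call price of 27,000
print(cbbc_intrinsic_value(26_200, 27_000, bull=False))  # → 800.0
```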

The pricing of a CBBC and an issuer’s risk exposure therefore changes as the value of the underlying changes. This became a problem for issuers on the day that HKEX’s futures market disappeared but cash equities on the exchange continued to trade.

Most issuers’ quoting engines rely on futures data to calculate prices on CBBC products as the spot price moves, and to place and unwind hedges as their exposures change.

Wider still and wider

The first indication of problems on September 5 came during the morning session of trading. Execution difficulties on HKEX’s futures platform caused CBBC issuers to quote wider spreads to account for the risk they were unable to offset with hedges.

“We were able to trade in the morning but had increasing difficulty trading in the futures market,” says UBS’s Chau. “As a result, we widened the bid/offer spread of the HSI products, which we hedge with Hang Seng futures, especially for CBBCs which have a high delta and are very sensitive to underlying price moves.”

Some hedged their delta risk – the sensitivity of their exposures to changes in the value of the underlying – by investing in TraHK, an exchange-traded fund that tracks the HSI. Others bought and sold offsetting products issued by competitors. Issuers say this is an expensive way to hedge, due to taxes and the need to frequently rebalance to stay delta neutral.

As the morning wore on it became ever more difficult for issuers to maintain liquidity and not exceed their risk limits, says Ivan Ho, head of warrants and CBBCs at Credit Suisse in Hong Kong.

“Even if you hedge using a basket of stocks or buy other issuers’ products there is still some slippage on your hedging,” says Ho. “Even if you buy a tracker fund, the bid/ask is about 0.4–0.6%, so there is a lot of noise on the delta. So after we sell the product we cannot offload the delta.”

Martin Wong, head of exchange-traded solutions for equity and derivatives in Asia-Pacific at BNP Paribas, agrees that continued difficulties in hedging pushed spreads wider across the Street.

“Obviously the fact that we were not able to use futures to hedge meant that we had to quote a bit of a risk price, but that was the case for everyone in the morning anyway,” he says.

Pricing hitch

The futures outage did not just hamper hedging strategies. It also affected how issuers calculate the prices they should be quoting for CBBCs.

Most listed product issuers use valuation systems that rely on data feeds of real-time futures prices. Issuers say this makes sense because futures ultimately converge to the price of the stock market at expiry, meaning the price of a CBBC will follow the futures market.

Any disruption to the futures platform would prevent issuers from accurately pricing CBBCs. At least two banks, including BNP Paribas, anticipated this problem during the morning session and decided to switch to reference spot prices for the afternoon session, reconfiguring their systems accordingly.

Since the relationship between cash and futures is relatively linear, the spot rate can be used as a proxy, albeit an imperfect one. But it enabled the banks to continue quoting prices, even after other issuers had stopped.

For other issuers, there was no time to recalibrate their systems between the resumption of listed products trading after the lunch break and the suspension of futures trading at 2pm, says Credit Suisse’s Ho.

“For some of the issuers it can take up to half an hour to change the system, and that means it is really difficult to move it to another underlying after the market opens,” he says.

BNP Paribas’ Wong says the bank benefited from the decision to switch the data feed in its valuation system to the spot rate while it had the chance.

“We are lucky that our system could switch to spot, and we made that switch in the morning when we noticed the futures market was not reliable,” says Wong. “We were getting extra flows from people who still wanted to trade.”

Another issuer, which prefers to remain anonymous, says it was also able to continue quoting by using an algorithm that performs analytics on HSI stocks to generate synthetic futures prices.

“Our systems were not affected too badly in finding the fair value of the products,” says a derivatives trading head at the issuer. “It didn’t have a big impact.”


After HKEX announced the complete shutdown of the HSI futures market at 2pm, a majority of CBBC issuers were unable to find the fair value of their products and stopped providing quotes to investors. At least one – Societe Generale – suspended trading in its CBBCs altogether, citing the need to protect investors, taking them off the market until futures trading resumed.

“After 2pm, futures trading was halted. We not only encountered difficulties in hedging but the source reference price was also unavailable,” says UBS’s Chau. “As a result, many issuers were unable to provide liquidity for HSI products.”

Without the anchor of issuers’ quotes, the bid/ask spreads on the products were now left almost entirely to market forces.

The impact of this disruption is clearly seen in the market data, with CBBC trading volumes at HK$1.35 billion on September 5, down over 80% versus the year-to-date daily average of HK$7.26 billion.
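The drop is straightforward to reproduce from the two figures quoted (a quick Python check):

```python
outage_day = 1.35    # CBBC turnover on September 5, HK$bn
ytd_average = 7.26   # year-to-date daily average, HK$bn

drop = 1 - outage_day / ytd_average
print(f"Turnover down {drop:.0%} versus the daily average")  # → 81%
```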

The HKEX spokesperson says the outage was the first time in 20 years that the exchange has had to halt derivatives trading, save for market-wide trading suspensions due to weather emergencies.

HKEX guidelines say that under normal circumstances issuers are expected to provide active quotes on their CBBCs. Many issuers include clauses in their listing documents that state they may decide to stop quoting when trading of options and futures contracts is limited.

But as there were no binding rules around what to do when the futures market goes dark, the outage resulted in discrepancies across the Street.

Some issuers simply stopped offering quotes, which meant their CBBCs could still be bought and sold with other dealers that were still live. One issuer, Societe Generale, prevented all trading in its contracts. Others were able to continue to quote. So, for example, investors could buy and sell UBS CBBCs at BNP Paribas during the outage, but not Societe Generale’s versions.

This has raised serious concerns that investors – particularly those in the retail segment – might not be getting a fair price. In the absence of issuer quotes, the prices of the products were being driven solely by supply and demand, with no anchor to the underlying price.

Some issuers believe HKEX should have suspended all trading on HSI CBBC products at this point – a move with both advantages and disadvantages for investors.

“If the futures market is not trading, it probably makes sense for all products [CBBCs and warrants] linked to the HSI to be suspended and for the knockouts to be overridden which is consistent with the arrangement that governs single-stock products,” says Chau. “It would be helpful for the exchange and the regulator to consider such an arrangement.”

HKEX is understood to have told issuers privately after the outage that they could suspend all trading on their products if required. But that too could cause problems for investors: suspension of a product means an investor cannot sell it to any other dealer if it needs to close a position.

Societe Generale’s decision to suspend trading was made to prevent investors trading at the wrong price, says Keith Chan, the bank’s head of Asia-Pacific listed distribution in Hong Kong.

“At the time we just felt it was the right thing to do,” adds Chan. “We have a hotline that investors can call – there were a lot of calls. We felt that to protect these guys from trading away from the fair price, we needed to suspend.”

Two bankers say investors were unhappy with the decision by certain dealers to suspend the products. The investors reportedly felt frustrated that they couldn’t exit their positions and were left at the mercy of market fluctuations. In the event, the index was flat for the day, limiting the damage.

Three issuers say the listing rules should be changed so that, if another futures outage were to happen at HKEX, banks and investors would be protected. This could be achieved by ruling that all products on the market are automatically suspended, they say, with the stipulation that any knockout events occurring during the suspension are not observed.

“We think that the products should be suspended at the same time,” says one head of listed products. “So long as futures are suspended the liquidity we can provide is abnormal, and the chance of trading away from fair price is higher. If that is the case it is more likely the retail side will suffer.”

UBS’s Chau says another benefit of such a rule would be consistency, the lack of which frustrated investors when issuers stopped quoting after the outage.

“Market participants, the exchange or the regulator need to agree on the knockout rules should the futures market be closed,” he says. “If there is a suspension, all products should be governed by the same rule.”

Credit Suisse’s Ho agrees that perhaps the biggest problem on the day was inconsistency, as some issuers continued to trade and others did not.

“HKEX let issuers decide whether they wanted to suspend their product and I think for investors that created a lot of confusion, with some products suspended and some trading in the market,” he says.

What a difference a day makes

CBBC issuers say there was an element of good fortune in the timing of the futures outage. After falling roughly 0.4% in the afternoon, the HSI closed flat on September 5. Yet only the day before, the index had rallied by roughly 4% after Hong Kong’s leader Carrie Lam announced the formal withdrawal of the extradition bill that had sparked months of social unrest in the city.

A movement of that magnitude the following day, while futures trading was disrupted, could have been disastrous, issuers say, since it would have been impossible to unwind hedges on products that knocked out. At that point, delta-neutral positions would evaporate and a gain or loss on hedges would be determined by the direction of issuers’ exposures, says Dick Chau at UBS.

“If bull contracts knock out and issuers are unable to sell the futures in a falling market, they need to explore possible proxy hedges or else they would be facing bigger losses than expected,” he says.

A Hong Kong-based derivatives trading head at a listed products issuer agrees that a tranquil index that day was a key reason issuers escaped largely unscathed.

“That day was pretty quiet so we were lucky,” he says. “The market wasn’t moving around too much, so there were no big chunks of CBBC knockouts. If there was a big buy or selloff then it would have impacted not just us but the whole Street.”


Credit data: US slowdown starts to bite for high yield

Credit quality in the US is turning, while the UK is sliding sharply, writes David Carruthers

US economic growth has slowed down during 2019, coming in at 1.9% in the third quarter, compared with 3.1% in the first quarter. The decline is starting to be reflected in corporate credit quality as well. The creditworthiness of US high-yield corporates has declined by 3% since the start of 2019, while investment-grade corporates are flatlining.

That’s still a stronger performance than in the UK, where investment-grade and high-yield credit quality is in negative territory – investment-grade credit quality has fallen by around 5% since the end of 2018. Even the top 100 companies, which had been relatively resilient, have now started to slide since the second quarter.

This decline is unsurprising, as the economy in the UK is much more challenged than in the US. UK economic output fell 0.2% in the second quarter of 2019, the first such decline since the fourth quarter of 2012. As the third quarter was overshadowed by fears over the possibility of a no-deal Brexit and the final quarter of the year will now be dominated by the uncertainty of a general election campaign, it seems unlikely the economy will recover during the second half of 2019.

Recent high-profile bankruptcies at Thomas Cook and Mothercare have highlighted the weakness of retail-related sectors in the UK in particular. UK names are relatively heavily represented in Credit Benchmark’s numbers, so their deterioration is also reflected in the global numbers – 23 consumer goods and 47 consumer services companies have suffered recent downgrades in the credit estimates compiled by banks.

Only a few sectors globally, such as healthcare, telecommunications, and oil and gas, have enjoyed upticks in creditworthiness in recent months. However, even the improvement in oil and gas masks diverging fortunes for different business models. Although oil prices have recovered somewhat since the start of the year, when US WTI crude dipped toward $40 per barrel, they are still barely half the peaks of more than $100 seen in 2014. Given the time lag to complete projects, some production coming online now may no longer make economic sense. This is reflected in the weaker performance for US-based exploration and production companies, compared with integrated oil and gas models.

Coupled with the slowdown in the US and UK, the trends in the eurozone and China point to a weaker global economic outlook going into 2020, all of which makes more speculative “growth company” investments look a less appetising prospect. Banks’ credit risk assessments on companies generating poor returns are turning sharply negative, reflecting that switch to companies with more dependable earnings profiles.

Global credit industry trends

The latest consensus credit data shows that credit activity for corporates and financials has slightly increased, with 4.8% of entities moving by at least one notch, in comparison to 4.0% last month. Figure 1 shows detailed industry migration trends for the most recent published data, adjusted for changes in contributor mix.
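As an illustration of the migration statistic quoted above – the share of entities whose consensus rating moved by at least one notch over the month – a minimal sketch with simulated data might look like this. The ratings scale and numbers are invented, not Credit Benchmark’s:

```python
import numpy as np

# Hypothetical example: ratings expressed as integer notches
# (lower = stronger credit); entities and moves are simulated.
rng = np.random.default_rng(0)
last_month = rng.integers(5, 15, size=1000)                         # consensus ratings a month ago
this_month = last_month + rng.choice([-1, 0, 0, 0, 1], size=1000)   # some entities migrate

moved = np.abs(this_month - last_month) >= 1   # moved by at least one notch
migration_rate = moved.mean()
print(f"{migration_rate:.1%} of entities moved by at least one notch")
```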

[Figure 1]

North America oil and gas

In credit terms, the North American oil and gas industry has been on a rollercoaster over the past few years. Shale oil has changed the industry dynamics, but the North American oil and gas subsectors have usually shown very similar credit trends. This seems to be changing, with some important divergences appearing. Figure 2 shows the credit trend and distribution of 250 North American exploration and production and 160 North American integrated oil and gas companies.

[Figure 2]

UK top 100

Since the 2016 referendum on EU membership, there has been a distinct divergence in credit trends for the largest UK companies – many of which are heavy US dollar earners, and can be proxied by the FTSE 100 index – compared with those with a more European or domestic focus. For the past few years, most UK corporate credit aggregates have shown a steady decline; but the aggregate for the top 100 has generally been improving. However, recent months show an ominous change. Figure 3 shows the credit trend and distribution of the UK top 100 companies.

[Figure 3]

Value versus growth: credit risk and earnings yield

As this year’s string of failed or pulled IPOs demonstrates, investors are turning their backs on overvalued growth companies that focus on revenue instead of profit.

And after years of lacklustre equity returns, value stocks have seen a return to favour.

Figure 4 plots the credit data for 1,246 US corporate borrowers, split into three valuation categories based on earnings yield.

[Figure 4]

UK v US corporates by credit category

Credit risk for US corporates showed a steady improvement after the Trump tax cuts, but recent data shows this effect is tailing off and in some credit categories has already turned negative. The negative impact of Brexit-related uncertainty on UK credit risk is now well known, but the decline is not uniform across credit categories. Figure 5 shows recent credit trends of 1,500 and 1,000 UK and US corporates, with credit category IGb (the lower segment of the investment-grade category), respectively, as well as 3,000 and 1,500 UK and US corporates with credit category HYb (the upper segment of the high-yield category), respectively.

[Figure 5]

About this data

Credit Benchmark collects monthly credit risk inputs from 40-plus of the world’s leading financial institutions, making it possible to follow credit trends across geographies and industries. In all, the dataset contains consensus ratings on about 50,000 rated and unrated entities globally.

David Carruthers is head of research at Credit Benchmark.


Robo-raters help banks vet vendors for cyber risk

Specialists tout service for monitoring third parties amid tougher rules on outsourcing risk

If you want to reduce the risk posed by third parties to your organisation, you hire another third party to police them.

This concept may not be intuitive, but cyber risk rating companies such as BitSight, RiskRecon and SecurityScorecard have made it central to their business proposition.

These companies are trying to offer an alternative to the staple methods of third-party risk management, where banks vet vendors using questionnaires, lengthy audits and site visits. Instead, the rating companies scrape the internet for any data that can help paint a picture of a third party’s cyber security defences and their vulnerability to cyber crooks.

Financial institutions are weighing up the service as they struggle to manage the risk posed by an intricate network of third parties. Many of those third parties themselves outsource to external vendors, creating a complex web of vendor relationships for banks to monitor.

“It’s risk management once removed, and it’s a problem the whole industry faces,” says Richard Downing, head of vendor risk management at Deutsche Bank in London.

Banks hoping for a magic bullet from cyber risk rating companies may be disappointed, though. There are questions over whether the ratings provide a sufficiently comprehensive measure of vendor risk. Some believe ratings can only ever complement, not replace, banks’ own internal vetting processes.

Regulators are well aware of the problem. The US Federal Reserve is focusing on vendor risk management as one of its supervisory priorities for the country’s largest banks, while the European Banking Authority has released stringent guidelines on outsourcing arrangements. The European Securities and Markets Authority plans to release its own outsourcing guidelines for financial firms not under the purview of the EBA next year.

The spectre of data loss is one of the biggest fears for risk managers, judging by the annual Top 10 op risks survey, which in 2019 placed data compromise in the top slot for the first time. As well as the costs from reputational damage and customer remediation, data loss can also attract swingeing fines under Europe’s sweeping General Data Protection Regulation (GDPR) laws.

Know the score

Cyber risk rating providers employ big data techniques to gauge the cyber security capabilities of firms, scraping the internet for information that can provide clues as to a company’s resilience against hacks, outages and other threats. The data is aggregated and run through an automated program, which scores the data along preset parameters. These scores are weighted to produce a security rating. SecurityScorecard has a 100-point system and gives out grades on a scale of A to F, with a report card that highlights what actions can be taken to improve the grade. BitSight offers a rating on a scale from 250 to 900 points, similar to a credit score, and Risk Recon provides a score anywhere from zero to 10.
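The weighting-and-grading step described above can be sketched as follows. This is a hypothetical illustration loosely modelled on the 100-point, A-to-F scheme mentioned; the factor names, weights and grade cutoffs are assumptions, not any provider’s actual methodology:

```python
# Weighted aggregation of per-factor scores (each on a 0-100 scale)
# into an overall security rating and a letter grade.
def security_rating(factor_scores: dict) -> tuple:
    weights = {"patching": 0.30, "dns_health": 0.20,
               "network_security": 0.30, "leaked_credentials": 0.20}
    total = sum(weights[f] * factor_scores[f] for f in weights)  # 0-100
    for cutoff, grade in [(90, "A"), (80, "B"), (70, "C"), (60, "D")]:
        if total >= cutoff:
            return total, grade
    return total, "F"

score, grade = security_rating({"patching": 95, "dns_health": 80,
                                "network_security": 70, "leaked_credentials": 60})
print(round(score, 1), grade)  # 77.5 C
```

In practice a provider would feed hundreds of scraped signals into many such factors; the point here is only that the rating is a weighted roll-up mapped to a coarse grade.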

Broadly, these services monitor whether a firm’s systems are properly patched, the health of domain name systems (DNS), the security of a company’s network and other factors. Patching, or updating, the software used by companies is a basic but important way to avoid cyber breaches, experts say, as hackers can exploit temporary holes in security in unpatched software. DNS is the decentralised way in which entities are labelled on the internet, and companies must make sure to monitor their own DNS designations to avoid malicious activity – for example, attackers being able to affect internet traffic or impersonate a company’s email address.

Some of the cyber risk ratings apply a very good layer of analysis to the data they gather … But the data analysis of some providers can be of low quality, so can’t be used as a decision point in a risk assessment

Charles Forde, Allied Irish Bank

However, these services can go beyond just monitoring the perimeter of companies’ security infrastructure. SecurityScorecard also eavesdrops on web chatter about companies to determine if data has been leaked or if hackers are planning to launch a cyber attack on a target.

Similarly, BitSight boasts of having access to one of the largest cyber sinkhole infrastructures in the world, after acquiring a Portuguese cyber analytics firm in 2014. The sinkhole is a huge dragnet that intercepts fake URLs. Often, this type of malicious traffic emanates from groups of infected computers referred to as botnets. By accessing these botnets, BitSight, SecurityScorecard and other firms can track communications sent by the computers and obtain a worldwide view of the ebb and flow of infections. This can provide some important intelligence on the vulnerability of different firms to potential cyber attacks.

“Access to this sinkhole lets us know when malicious links are clicked, as our sinkhole intercepts the message sent back to the hacker,” says Jake Olcott, vice-president in communications at BitSight.

SecurityScorecard also says it uses cyber sinkholes to aid monitoring. The company’s vice-president of international operations, Matthew McKenna, says automation is important in enabling cyber rating firms to increase the range of vendors they cover. He claims the firm scores 1.1 million companies.

RiskRecon did not respond to requests for comment.

The breadth of coverage offered by rating providers may be a draw for multinational companies that need to set variable levels of risk tolerance depending on region or market.


“Take a firm with an asset management business in the US and a wealth management business in Singapore,” says Charles Forde, group head of operational risk at Allied Irish Bank. “You will likely have a different risk appetite for vendors in these different regions so you can tailor your findings to each business. A score might be acceptable for one business but not another. That flexibility is useful.”

Cyber rating firms operate under a subscriber payment model. This sets them apart from their credit rating agency cousins, which use an ‘issuer pays’ model – a structure that some claim introduces perverse incentives into the rating process.

“Our business is similar to that of a conventional credit rating agency, but there are some fundamental differences,” says Olcott. “In the financial ratings market, organisations pay to be rated, which can lead to a significant conflict of interest. For us, any organisation can pay to get on the platform and see the ratings of hundreds of thousands of firms.”

Fast response

Proponents of cyber ratings claim the service offers a quick and easy snapshot of a vendor’s vulnerabilities compared with the traditional vetting procedure involving questionnaires and audits.

“These utilities become very cost-effective because while an audit or questionnaire of a vendor can take a minimum [of] four to six weeks, these cyber risk rating services give you an answer immediately,” says Amit Lakhani, the global head of IT and third-party risks for corporate and institutional banking at BNP Paribas in London.

Financial institutions have the option to outsource the questionnaire process to an external monitoring service such as KY3P from IHS Markit or the TruSight utility developed by large American banks.

Allied Irish Bank’s Forde proposes an alternative approach to screening new vendor relationships, using cyber risk ratings instead of questionnaires. Banks could request and affirm basic information that would normally be included within a vetting questionnaire, as minimum contract standards with vendors. Such information could include whether a vendor has a chief information security officer who sets policies, or what its processes are for data encryption. For more technical details normally requested in a questionnaire, the cyber rating firms can come into play, providing up-to-date information on cyber security policies.

“Cyber risk rating services offer an instant response on technical vulnerabilities, issues with patching and encryption, among other risks,” says Forde. “This approach also extends to discovery and monitoring more deeply into the supply chain, covering fourth parties.”

Gaining a detailed picture of the supplier relationships among vendors is hard for a large institution that might have hundreds of individual outsourcing arrangements. Cyber rating firms are starting to offer analysis of the chains of connection among vendors, to show third and fourth parties.

“If your supplier is subcontracting to another supplier, then these rating agencies can provide you with a view of the number of fourth parties your supplier has,” says BNP Paribas’s Lakhani. “It is very helpful to see if all your fourth parties are converging to certain cloud service providers such as Amazon Web Services [AWS] or Microsoft’s Azure platform. This could change your view of risk if it is determined that many of your third parties would suffer if any of these services were to go down tomorrow.”

He adds: “As an organisation, this helps because the EBA is very interested in seeing where risk concentrations exist.”

New guidelines from the EBA, released in February, provide detailed principles on how to manage outsourcing risk from third parties. Banks must maintain a comprehensive register of outsourcing relationships and closely scrutinise vendors based on their “criticality” to the functioning of the business. The rules go beyond the scope of the outsourcing guidelines released by the Committee of European Banking Supervisors in 2006, ramping up the compliance burden with regard to third and fourth parties, banks report.

As regulators finesse their guidelines for the management of third-party risk, their expectations for how firms tackle cyber risk are also taking shape. US regulators initially favoured a tough approach that would compel financial institutions to introduce a two-hour return to operations following a cyber attack. The proposal was shelved after industry criticism, but the Fed is pushing ahead with an initiative to set common standards for classifying and modelling cyber risk.

In Europe, the GDPR rules over data privacy introduced last year have forced all companies that handle personal data to overhaul how they use and store that information.

“Regulations are tightening in respect to third-party risk monitoring and assurance,” says McKenna from SecurityScorecard. “As an example, GDPR requires organisations to continuously monitor and understand third-party risk related to data privacy.”

The EBA’s focus on concentration risk is designed to ensure firms are not becoming overly dependent on the functioning of certain key entities. Cloud services such as Azure and AWS are under particular scrutiny by regulators, as banks and financial market utilities such as clearing houses outsource important functions to them.

Deutsche Börse, one of the world’s largest exchange groups, recently signed a deal with Microsoft, acknowledging that the deal allowed it to place services into the cloud that were “typically considered essential” for firms’ core businesses. The Options Clearing Corp has started a multi-year project to modernise business processes, including using the public cloud.

Cyber risk ratings could offer a way of sourcing information about fourth parties as companies adapt to the stringent new guidelines. It is unclear if firms will be able to negotiate rights of access to information on fourth parties, as required by the guidelines, according to Deutsche Bank’s Downing: “It’s something the industry is working on with vendors.”

“It is quite difficult to ask for third parties to grant us audit and access rights for fourth parties,” he adds. “It is still being debated as to what exactly the EBA guidelines mandate when it comes to fourth-party risk management.”

Data crunch

Third-party risk has a broader scope than the outsourcing of tech services. Large financial firms connect with many service providers that are not bound by outsourcing contracts and may be reluctant to divulge vital information.

William Moran, chief risk officer for technology at Bank of America, recently said important financial market utilities such as central counterparties often would not answer questions about their cyber security arrangements.

“They either won’t participate at all – that is, they won’t answer your questions – or they won’t let you do an on-site [inspection], or they basically cherry-pick which questions they want to answer,” he said at the Risk USA conference in New York in November.

[Financial market utilities] either won’t participate at all or they won’t let you do an on-site [inspection], or they basically cherry-pick which questions they want to answer

William Moran, Bank of America

Regulators that usually have privileged access to company information “don’t tend to be very responsive about what they’re doing in terms of cyber”, he added.

“I think the notion of having single, independent groups trying to evaluate vendors for things like cyber is good,” he said.

While the principle of cyber ratings may sound persuasive, successful application of the concept is a different matter. For rating firms that track hundreds of thousands of companies continuously, providing a consistent level of analysis on the data scraped from the internet is crucial. Some suggest the ratings firms are not always successful in this regard.

“The level of much of the detail provided by these services is quite good,” says Forde of Allied Irish Bank. “I think the challenge is you can’t use all these services in the same way. Some of the cyber risk ratings apply a very good layer of analysis to the data they gather, providing accurate conclusions. But the data analysis of some providers can be of low quality, so can’t be used as a decision point in a risk assessment.”

James Tedman, a partner at ACA Aponix, an operational risk advisory firm in London, agrees that the concept of cyber risk ratings is valid, but says there will always be gaps in the coverage such firms offer.

“An ‘outside-in’ approach is a useful complement to questionnaires in assessing and monitoring vendor risk,” he says. “However, you can only get to a subset of risk by using these cyber risk monitoring services.”

Tedman adds that a real-time service based on data will not offer insight into more qualitative factors such as the level of staff awareness of cyber issues in a firm, or how susceptible the company is to a fourth party with access to the network.

“These are the sort of risks that cannot be captured from the outside, and require on-site risk assessments or questionnaires,” he says.

In other words, firms would be foolish to rely solely on external ratings for a complete picture of third-party cyber risk. Banks may need to devise internal processes to complement the information gleaned from ratings. Deutsche Bank is doing so with its protective intelligence unit that looks through news items to determine threat levels from vendors. The bank is working to better link this function with what it calls a “vendor criticality matrix”, which tabulates the systemic importance of third parties to the firm.

“There is a broader industry push to both use third-party services that help banks monitor vendors, but also to develop internal systems that follow news items about those vendors,” says Downing.

Third-party risk encompasses much more than a cyber risk rating can cover. Take, for example, the reputational risk that may affect a firm if it uses a vendor with poor working conditions. In other areas of tech, such as manufacturing, companies have faced public criticism over employment practices – Taiwanese firm Foxconn is a prominent example.

To get a complete view of vendors, firms will have to employ a mix of oversight strategies, of which cyber risk rating firms are one element. The machines are not quite ready to take over yet.

Correction, November 12, 2019: An earlier version of this article stated that the Office of the Comptroller of the Currency was working on a project to modernise business processes, whereas the Options Clearing Corporation is the organisation concerned. The article has been corrected.

Additional reporting by Tom Osborn

Editing by Alex Krohn


Quants bring ‘triptych’ of variables to risk measurement

Risk and portfolio managers at La Francaise and LFIS are squeezing more information out of stress tests

Value-at-risk (VAR) and expected shortfall (ES) are ubiquitous in finance. They are used by banks and asset managers to estimate the risk of portfolios. Regulators use them to set capital requirements.

But the metrics have well-known drawbacks. Both VAR and ES are backward-looking, relying on the past to predict the future. The methodologies only consider returns and volatility, ignoring the underlying scenarios and factors that determine performance. And while they provide a reasonable measure of the risk profile of linear, long-only portfolios that invest in a single asset class, the results need to be taken with a grain of salt when dealing with more complex structures, such as those with hedges or non-linear payoffs.
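To make the backward-looking point concrete, both metrics are read directly off a window of past returns. A minimal sketch of historical VAR and ES, using simulated data purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.0, 0.01, size=1000)   # simulated daily returns

def historical_var_es(returns: np.ndarray, level: float = 0.99):
    losses = np.sort(-returns)[::-1]                   # losses, largest first
    k = max(1, int(round(len(losses) * (1 - level))))  # tail size, e.g. worst 1% of days
    var = losses[k - 1]                                # loss exceeded on ~1% of days
    es = losses[:k].mean()                             # average loss within that tail
    return var, es

var, es = historical_var_es(returns)
print(f"99% VAR: {var:.2%}, 99% ES: {es:.2%}")  # ES is always at least VAR
```

Nothing in the calculation looks at scenarios or risk factors – only the realised return history – which is precisely the limitation the stress-testing approaches below try to address.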

To overcome these limitations, risk managers complement VAR and ES with stress tests to gauge the resilience of portfolios in adverse market conditions.

Quants at La Française Group and LFIS in Paris have taken that one step further, developing a forward-looking methodology they are calling an extended reverse stress test (ERST).

The new test provides not just a loss estimate, but also the specific scenario associated with the loss, expressed as a vector of values of all risk factors, coupled with a measure of how plausible the scenario is.

ERST was developed by Pascal Traccucci and Benjamin Jacot, global head of risk and quantitative risk manager, respectively, at La Française, working together with Luc Dumontier and Guillaume Garchery, head of factor investing and head of quantitative research, respectively, at LFIS.

They call their method a ‘triptych approach’ because, given any one of three variables – plausibility, loss and scenario – it allows them to derive the other two.

“The model is typically applied starting either from the scenario variable or from the level of loss,” says Traccucci. He explains that “managers often have VAR in mind, but not a scenario corresponding to it, so we set the VAR and the scenario” to help the investment decision process.

Normally, a risk manager has a maximum loss boundary and wants to know how probable it is, and what scenario might lead to it. Conversely, a portfolio manager might start with a scenario and want to know how big the losses could be and how probable that is.

Plausibility is a variable that quantifies, in units of standard deviation, the distance of the scenario under consideration to the average scenario, which is built as the vector of average value of each factor. It is calculated using the Mahalanobis distance, the span between two points in a multidimensional space – in this case, the vectors of risk factors. The calculation gives some idea of whether a scenario is plausible enough to be considered, or if it should be discarded.
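A minimal sketch of that plausibility calculation, using simulated factor data – the two-factor setup and the numbers are illustrative assumptions, not the authors’ model:

```python
import numpy as np

# Simulate two correlated risk factors to stand in for historical factor data.
rng = np.random.default_rng(1)
factors = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.8], [0.8, 1.0]], size=5000)

mu = factors.mean(axis=0)                   # the average scenario
cov_inv = np.linalg.inv(np.cov(factors.T))  # inverse factor covariance

def plausibility(scenario: np.ndarray) -> float:
    """Mahalanobis distance from the average scenario, in standard deviations."""
    d = scenario - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Two shocks of equal Euclidean size: one moving with the factor
# correlation, one fighting it. The second is far less plausible.
print(plausibility(np.array([2.0, 2.0])))    # moves with the correlation
print(plausibility(np.array([2.0, -2.0])))   # moves against it -> much larger
```

The distance penalises scenarios that break the historical co-movement of factors, which is what lets a risk manager discard implausible stress scenarios.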

The model is typically applied starting either from the scenario variable or from the level of loss

Pascal Traccucci, La Française

“We wanted to develop a tool that was useful to both risk and portfolio managers,” says Dumontier. “Indeed, portfolio managers can provide not only their worst-case scenario, but their best and expected case scenario as well and derive a plausibility level of both.”

A senior risk expert at a large global asset management firm praises the research as both innovative and practical.

“The matter of plausibility of scenarios is, in my experience, regularly brought up when discussing results of stress tests,” he says. “The method is very interesting and potentially of immediate use.”

ERST builds on reverse stress tests. Instead of measuring the portfolio impact of adverse conditions, reverse stress tests assume a loss and then try to determine the scenarios that could lead to it.

Crucially, the methodology can be applied to portfolios with non-linear, quadratic payoffs. This is important for La Française and LFIS, which manage several complex risk premia funds that employ non-linear, long-short strategies across multiple asset classes, sometimes using complex derivatives structures.

Research on the model started in 2016 in the risk management department. Soon, portfolio managers got involved, and La Française and LFIS have been using the model for a year now. The team says ERST is versatile enough to be applied in different contexts, including investments in real estate or private equity funds with illiquid assets. In the future, the methodology could be extended to stress-test not only individual portfolios, but the entire firm at the book level.


Swiss banks ask, how about a magic trick?

Banks pull off an accounting trick – with the help of their regulator

What if there was a way for a bank to conjure higher net interest income (NII), without raising rates for loans or cutting them for deposits? What if it could do so with just a flick of an accountant’s pen?

You would be wise to be sceptical, as Gotham City’s thugs were when Heath Ledger’s Joker asked: “How about a magic trick?”

But in this context, there’s no need for any Hollywood special effects, nor a psychopathic clown. Just enterprising bank managers and a willing financial regulator.

Two banks have already pulled off the trick: UBS and Credit Suisse.

On October 1, 2018, UBS began reporting in US dollars, instead of Swiss francs and UK pounds.

This resulted in an immediate uplift in reported group NII of $300 million annually. Abracadabra! The bank also benefited from reduced foreign exchange-induced earnings volatility, even though a hefty chunk of its assets and liabilities are denominated in currencies other than the dollar.

This October, Credit Suisse emulated its Swiss peer by pulling off a similar currency switcheroo for its operational risk-weighted assets. As of the end of the fourth quarter, these would be denominated in dollars instead of Swiss francs.

The bank said the move was justified as the majority of its historic op risk losses (read ‘fines’) were incurred in US dollars. But it also meant the bank would hold more capital in dollars than francs, which, given the interest rate differential between the two currencies, and the effects of changing its capital hedging programme, will yield Sfr60 million ($60.2 million) of additional NII in Q4. The total benefit on a full-year basis is expected to be $250 million.

Neither bank could have pulled off the feat without their glamorous assistant, Finma. The Swiss watchdog signed off on both changes.

How can a change in reporting currencies lead to millions in additional earnings? It sounds like sleight-of-hand. Certainly some magical thinking is involved. In Credit Suisse’s case, by choosing to denominate op RWAs in dollars, it has to hedge the equity capital held against these by investing in dollar assets – which offer a tastier pick-up than Swiss franc investments. Essentially, it gave itself permission to invest more in higher-yielding assets.
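A purely illustrative back-of-envelope of that mechanism – all figures below are assumptions chosen only to match the reported order of magnitude, not Credit Suisse’s actual numbers:

```python
# Capital hedged in a higher-yielding currency earns the rate differential.
capital_redenominated = 10e9   # hypothetical: $10bn of capital backing op RWAs
usd_yield = 0.018              # hypothetical short-term dollar yield
chf_yield = -0.007             # hypothetical (negative) Swiss franc yield

extra_nii = capital_redenominated * (usd_yield - chf_yield)
print(f"Additional annual NII: ${extra_nii / 1e6:.0f}m")  # $250m
```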

Seeing through the illusions woven by financial reports is part and parcel of being an investor and, in these cases, shareholders have agreed to accept the changes as legitimate, rather than a case of accounting trickery.


Cat risk: why forecasting climate change is a disaster

Forecasters are poles apart on climate-driven catastrophes; insurers fear worse ahead

The year is 1987. The worst storm in centuries is about to sideswipe the UK with hurricane-strength winds. Notoriously – folklorically – BBC meteorologist Michael Fish addresses a viewer’s concerns: “Earlier on today, apparently, a woman rang the BBC and said she heard there was a hurricane on the way. Well, if you’re watching: don’t worry, there isn’t.”

The storm cost the insurance industry an estimated £2 billion ($2.5 billion). Although Fish claimed his comment was taken out of context, neither the storm itself nor the scale of losses it provoked was forecast by industry models. And the difficulty of modelling catastrophic events – cat risk – is getting more extreme with the march of climate change.

“We can’t quantify the impact of glaciers melting. As soon as you start modelling, you make assumptions. And some of those assumptions are fairly heroic,” says Swiss Re chief risk officer (CRO) Patrick Raaflaub. “That’s what reinsurance companies have to do for a living, but that doesn’t make us necessarily better at predicting outcomes.”

To estimate the cost of cat risk to their business, insurers rely on the expertise of cat modelling firms – the two most prominent being AIR Worldwide and RMS – which have the unenviable task of quantifying those losses.

“There’s so much uncertainty in present-day risk,” says a leading climate scientist at one of the largest Lloyd’s of London reinsurers, addressing the difficulty of pinpointing those numbers. He points to the initial model-assisted loss estimates for Typhoon Jebi – which struck Japan and Taiwan in 2018 – of just “a few billion”. In September, AIR raised its current loss estimate to $13 billion – but others suggest these figures will continue to rise with time and analysis – a phenomenon insurers call ‘loss creep’.

“Every month, they’re getting higher and higher,” says the climate scientist. “And they’re all wrong.”

Loss estimates that regularly come in significantly below actual losses, and disparities between the two firms' numbers, have worrying implications for insurers: that event impact is changing too rapidly to keep up with; that event signals are too open to interpretation; or that the best firms in the business are seriously diverging in their approaches.

Whatever the reason for the differences, the industry is searching for more consistent and accurate ways of capturing potential losses arising from cat risk, and may turn to synthesised techniques as a way forward. That implies a lot more work for an insurer that has historically consulted both firms and then settled on a middle way.

At a loss

2017 and 2018 were the costliest back-to-back years for insurers, with losses totalling $237 billion, according to data compiled by Aon. Last year, insured catastrophe losses totalled $90 billion, the fourth-highest on record. Weather disasters, among them hurricanes Michael and Florence and Typhoon Jebi, accounted for $89 billion of the total. In all cases, model predictions were significantly below actual losses (for more on historic losses in Japan, see box: A (very) brief history of cat modelling).

Across five major cat events of 2018, both firms' estimates came in well below actual losses: RMS's totalled $12.75 billion and AIR's $15.75 billion. Their average of $14.25 billion was roughly 65% below the true loss figure of $40.3 billion.

In the case of Typhoon Jebi, RMS estimated losses of between $3 billion and $5.5 billion, while AIR's estimate was between $2.3 billion and $4.5 billion – putting the average of the two firms' midpoints at $3.825 billion. According to Aon, actual insured losses were $8.5 billion. So the average estimate for Jebi was $4.675 billion short – an underestimate of more than 100%.

The least stark differential in the sample was for Hurricane Michael, in which actual loss amounted to $10 billion versus average estimates of $8.4 billion and $8 billion from RMS and AIR respectively. In the Woolsey Fire, they respectively estimated losses of $2.25 billion and $2.5 billion on a $4.5 billion actual loss.
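The Jebi arithmetic above can be checked in a few lines; all numbers are the ranges and actuals quoted in this article:

```python
# Recomputing the article's Typhoon Jebi figures (all amounts in $bn).
rms_low, rms_high = 3.0, 5.5   # RMS loss-estimate range
air_low, air_high = 2.3, 4.5   # AIR loss-estimate range
actual = 8.5                   # Aon's insured actual-loss figure

rms_mid = (rms_low + rms_high) / 2       # midpoint of RMS range: 4.25
air_mid = (air_low + air_high) / 2       # midpoint of AIR range: 3.4
combined = (rms_mid + air_mid) / 2       # average of the two firms: 3.825

shortfall = actual - combined            # 4.675
pct_under = shortfall / combined * 100   # underestimate of more than 100%
print(f"Average estimate ${combined}bn, ${shortfall}bn short ({pct_under:.0f}%)")
```

The shortfall of $4.675 billion is roughly 122% of the $3.825 billion combined estimate – the "more than 100%" miss described above.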

The research is saying that we might not expect more individual storms – but we may expect, globally, more intense storms

Pete Dailey, RMS

The disparity between the two firms’ estimates is also cause for concern – and central to the problem of effectively estimating cat risk. It suggests that the loss estimates being made in this field are in something of a state of disarray.

And as climate change advances, the gap isn’t getting any narrower. “Even Hurricane Dorian in the Bahamas this year, there’s no overlap in the loss estimates between RMS and AIR,” says the climate scientist. “So there’s this level of uncertainty.” AIR’s estimate is between $1.5 billion and $3 billion, while RMS puts it between $3.5 billion and $6.5 billion. He believes that future incidents are likely to be “an order of magnitude” greater.

Peter Sousounis, a meteorologist and the director of climate change research at AIR, says that modelling firms don’t always look at the same criteria. Two significant factors that AIR did not include in its Dorian estimates, he points out, are damage to infrastructure and ‘demand surge’ – the latter a phenomenon wherein repair and replacement costs are higher following large-scale disasters than they would be normally. A damaged roof, for example, might cost $X to replace on a normal day, but when there are 500 roofs with the same sort of damage in one geographic area, prices increase.

He says: “Given the devastation to Abaco, these factors could amplify losses significantly, and are probably largely responsible for significantly higher loss ranges.”

“The research is saying that we might not expect more individual storms – but we may expect, globally, more intense storms,” says Pete Dailey, a vice-president at RMS. The atmospheric scientist and meteorologist, who supervises RMS’s flood modelling, points out that hurricane-prone regions should begin to expect “fewer category ones and twos, but more threes, fours and fives – and those are the category of storms that produce much more loss”.

Asked whether AIR and RMS are responding to climate change differently, Sousounis says: “Our catastrophe models are founded on historical data, like most others. But we do not arbitrarily or indiscriminately incorporate all available data – at least, not with equal weight, and especially if those data show a long-term trend that can be attributed to climate change.”

The firm that merges decadal climate models into traditional stochastic natural catastrophe models the most quickly and credibly will be the winner

Alison Martin, Zurich

In AIR’s view, a 40-year window is the ideal in most circumstances, Sousounis argues, because climate change happens slowly: interannual variability, he says, can “easily” have an impact greater than climate change in “any given year”. As such, 40 years is enough to include variability without capturing “obsolete” climate data from the more distant past. “There are exceptions in either direction, of course,” he adds. “For example, our tropical cyclone models tend to include longer periods of data – but only because analyses have shown there is no detectable long-term trend in landfall activity.”

RMS did not respond directly to questions on the difference between the two firms’ estimates.

Cat model crisis?

There’s no doubt that anthropogenic climate change is making the jobs of the cat modellers significantly harder. Global warming produces a demonstrable increase in the incidence of extreme weather events. In light of such singular ecological disruption, the historical approach to cat modelling can seem dangerously optimistic or narrow. The technique certainly helps insurers evaluate the probability of the reoccurrence of events for which there is some precedent, but isn’t so useful when it comes to predicting the extraordinary.

Insurers use cat models to estimate losses from natural disasters such as hurricanes and earthquakes, and set premiums accordingly. The models are fed with data from historical records, meaning they do not account for the effects of climate change, which is producing more severe weather events.

Cat models often use stochastic methods as a starting point. Before losses can be estimated, stochastic processes are used to generate a large distribution of plausible catastrophe events and associated physical phenomena. These event distributions are based on expertise and whatever historical data is available for a given event type. Next, modellers simulate the impact of these hypothetical disasters on their known exposures. Exposure data might include geographic location, typical repair costs and the reliability of local infrastructure. In the last stage, models produce damage estimates based on the information they have been fed by their operators.
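A toy sketch of that three-stage pipeline – stochastic event generation, exposure, damage estimation – might look like the following. Every parameter here is invented for illustration and resembles no real vendor model:

```python
import random

random.seed(42)

def generate_catalogue(n_events):
    # Stage 1: sample plausible event intensities from an assumed
    # heavy-tailed distribution (a lognormal stands in here).
    return [random.lognormvariate(mu=0.0, sigma=1.0) for _ in range(n_events)]

def damage(intensity, insured_value, vulnerability=0.05):
    # Stages 2-3: a simple vulnerability curve maps event intensity to a
    # loss on the exposed portfolio, capped at the total insured value.
    return min(insured_value, vulnerability * intensity * insured_value)

catalogue = generate_catalogue(100_000)
exposure = 1e9  # hypothetical: $1bn of insured property in the region
losses = sorted(damage(x, exposure) for x in catalogue)

# Exceedance-style summary: the 99.5th-percentile ("1-in-200") loss.
var_995 = losses[int(0.995 * len(losses))]
print(f"1-in-200 modelled loss: ${var_995 / 1e6:.0f}m")
```

Real models replace each stage with physics-informed event sets, detailed exposure databases and calibrated vulnerability functions, but the shape of the computation is the same.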

But Greg Shepherd, CRO at Markel, another of the largest underwriters at Lloyd’s of London, points out that cat models are less useful for forecasting severe natural catastrophes that are “far more extreme than we’ve seen before, or in a location where we never expected one to occur” because, at the outer limits of the tail, there’s no historical data to feed the model. A cat model would be unable to spit out an accurate dollar value for a hurricane caused by changing weather patterns striking London, for instance, because not enough losses of a similar type would have been inflicted on insured properties in the past.

That leaves reinsurers having to price business whose exposure could alter fundamentally over the coming decades, says Shepherd's opposite number at a large European reinsurer. Claims could come due in 30 years or more – but in terms of realised losses, he says, "we only know when it happens".

“Look at wildfires: we can say, with a very high degree of certainty, that climate change is having an impact on the frequency and severity of losses there. But there’s also a suspicion that climate change will have an impact on large hurricanes, for example. That is an area where we know exposures are increasing: there is more being built in exposed areas, and rising ocean levels mean even more areas exposed. But if you look at the actual frequency and severity of losses, so far, it’s a plausible suspicion, but no more than that. Where the risk is materialising very, very slowly and we have very few data points, it’s really hard to track whether your prediction is successful or not. That’s when you need more margin around your predictions: you can’t take aggressive bets.”

It’s basically a set of synthetic events – events that haven’t happened – that we can create over thousands of years

Tina Thomson, Willis Re

Alison Martin, CRO at Zurich, agrees stochastic modelling techniques are of limited usefulness for now. Every firm, she says, is working on merging decadal forecasting – estimating climate variations over a decade – with orthodox stochastic models. Anthropogenic natural disasters are now more visible than ever, and this burgeoning historical record may soon be usable in day-to-day modelling.

“The firm that merges decadal climate models into traditional stochastic natural catastrophe models the most quickly and credibly will be the winner,” she says. “They will be able to say: ‘We can attribute X storm, X flood, X wind event to climate change’ – and the modelling would support it: ‘Here is the economic cost of climate change.’ No-one has done that yet, successfully. It’s a trillion-dollar question.”

A (very) brief history of cat modelling

The emergence of catastrophe modelling in the late 1980s was a cause for cheer among insurance companies. Weather-related losses – such as those caused by the storm of ’87 – that had plagued businesses, in some cases leading to major insolvencies, seemed as if they would soon become a thing of the past. Through leveraging cutting-edge science and mathematics, the portfolio impact of natural disasters could be simulated, assessed and understood. Premiums could be adjusted accordingly. Physical risk could be given dollar figures with new confidence.

But, given that event catalogues are generally based on the recorded characteristics of pertinent incidents throughout history, the most disastrous event a history-fed cat model can simulate will only be as severe as the severest event in that record.

For this reason, cat models did not prepare insurance companies for the 2011 Tōhoku earthquake, which produced losses far exceeding the projected probable maximum losses of most of the industry. While Japan is a notoriously earthquake-prone country, experiencing over 1,000 tremors of varying intensity every year, an event such as Tōhoku – a nine on the moment magnitude scale – was wholly unprecedented. It was the most powerful earthquake ever recorded in that part of the globe and the fourth-largest in recorded history. Thousands died as resultant tidal waves battered Japan's islands, and aftershocks were felt as far away as Russia.

“Nobody had considered a magnitude nine,” says Adam Podlaha, head of impact forecasting at Aon. “By definition, it could not be in the catalogues.” Munich Re, a large reinsurance company, estimated the insured losses caused by Tōhoku as $40 billion, while the World Bank said the total economic cost could reach over $200 billion. Swiss Re, another global reinsurer, stated that while the tremors themselves were within worst-case-scenario projections, the tidal behaviour and aftershocks following the quake constituted “blind spots” in the existing vendor models.

Tōhoku and events like it were dismissed as black swans – unanticipated super-outliers with extreme results – which, by definition, occur only rarely.

What is certain about climate change, scientists say, is that it will lead to climatic conditions where these black swans cease to look like such outliers.

Another world, another planet

New techniques could introduce more accuracy to estimating climate change-related losses.

“Standard actuarial techniques are simply not sufficient for natural hazards,” says Tina Thomson, a geomatic engineer and head of catastrophe analytics for Europe, the Middle East and Africa west-south at Willis Re, the specialist reinsurance division of Willis Towers Watson. There are, she says, simply not enough Tōhoku- or Katrina-level events recorded for actuarial techniques alone to be applicable. As such, the insights created by a stochastic catalogue are seen as incredibly valuable.

Insurers are being spurred by regulators and think-tanks to start applying so-called ensemble techniques to their exposures – an umbrella term for model-based quantification methods that employ multiple models at once. The two cornerstones of this approach are the familiar cat models and general circulation models, or GCMs – vast, planet-scale climate simulations that are maintained by academic institutions and governments.

You get your weather distribution. But how do you know it’s the right distribution?

Maryam Golnaraghi, Geneva Association

A GCM, also known as an ‘earth system model’, is essentially a replica earth with a realistic meteorology of its own that responds to programmable stimuli. By adjusting various parameters, modellers can create any number of ‘what-if’ planets, each with their own climatic, oceanic and atmospheric conditions. The inherent differences between the fake earths and the original can be as large or small as the modeller wants. A researcher might decide they want to see what would happen to world weather if sea surface temperatures suddenly rose by 3%, for example, or if pressure began changing by tiny increments in the troposphere.

In most circumstances, the simulated disasters occur in step with scenarios set out by the Intergovernmental Panel on Climate Change (IPCC) – a set of potential warming outcomes in which the world has become a degree or two hotter than it is today. Using authentic parameters taken from recorded history, modellers watch to see whether the simulated earth produces consistent and believable outcomes – appropriately sized hurricanes, accurate tidal behaviour, regular temperatures, and so on – that are faithful to those that have been successfully documented on our real planet.

“It’s basically a set of synthetic events – events that haven’t happened – that we can create over thousands of years,” says Thomson. “Tens of thousands, hundreds of thousands [of] simulations of potential events. Then we can quantify the impact.”

“The utility of that is that it allows events that have not occurred in the historical record to actually ‘occur’,” says AIR’s Sousounis. “And that’s an important consideration when it comes to climate change.”

Even the most sophisticated modelling, however, can’t do much to diminish the uncertainty inherent in anthropogenic climate change. The nature of global warming and the lack of obvious collective action plans mean that financial firms have a near-endless quantity of competing voices to choose from on the topic.

“[The] IPCC has counted four basic scenarios,” says Sousounis. “And I’m guessing there are probably 10 times that number of general circulation models, and they do different kinds of experiments.” There are even more cat models than that, he continues: “Let’s take 40 models and four climate change scenarios – that gives us quite a number of output possibilities.”

Maryam Golnaraghi, director of climate change and emerging environmental topics at The Geneva Association, says a GCM generating consistent distributions is not by itself proof that the model can be used to construct projections. A given GCM's behaviour may be regular, but merely producing a trend does not guarantee fidelity to the real earth: it has to produce the correct trend. Proving that, she adds, is no small task – to demonstrate its ultimate accuracy, a GCM must be tested against observable data.

“You get your weather distribution. But how do you know it’s the right distribution?” asks Golnaraghi. “You run it over and over – maybe 200 or 400 times – and you start to determine whether the model is giving you distributions that fall towards the same pattern.”
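In spirit, the repeated-run check Golnaraghi describes looks something like the sketch below, where a cheap stand-in function plays the part of a (vastly more expensive) GCM run and the question is whether its outputs settle on a common pattern:

```python
import random
import statistics

random.seed(7)

def model_run():
    # Stand-in for one full model run: a year of noisy daily temperature
    # anomalies, averaged. The 1.2 and 0.4 are invented parameters.
    return statistics.fmean(random.gauss(1.2, 0.4) for _ in range(365))

# Re-run the "model" many times, as Golnaraghi describes (200-400 runs),
# and examine whether the resulting distribution is stable.
ensemble = [model_run() for _ in range(400)]
centre = statistics.fmean(ensemble)
spread = statistics.stdev(ensemble)
print(f"Ensemble mean {centre:.3f}, spread {spread:.3f}")
```

If the runs cluster tightly around a common centre, the model is at least self-consistent; whether that centre matches the real climate is the separate validation question she raises.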

AIR and RMS both attest to using GCMs in various instances. AIR currently uses GCMs in modelling for hazards including flood and extra-tropical cyclones for the US, Europe and Japan. The firm uses GCM information to guide outputs from high-resolution numerical weather prediction models, which produce realistic simulations of precipitation systems.

The sweet spot is to find a period of record where we can capture a good representation of the current climate, as well as having a sufficient amount of historical record to represent the variability

Peter Sousounis, AIR

RMS uses GCMs extensively in its modelling work, according to a spokesperson. The events in the firm’s models for North American winter storms and European windstorms were generated by GCMs run in-house, and some elements in its Japanese typhoon and North Atlantic hurricane models are based on similar in-house simulations. It also uses simulations from the climate science community. Its medium-term hurricane rates are based on sea surface temperature projections created by the Coupled Model Intercomparison Project framework – one family of models used in informing the climate change reports issued by the IPCC.

RMS says that its work on future surge risk is based on sea levels taken from CMIP5, the fifth phase of the CMIP experiments. The firm says it uses hurricane rates from the same source when looking into future hurricane loss. GCM outputs are becoming more realistic, says RMS, and will play a larger role in catastrophe modelling in future.

The Goldilocks configuration

Outputs from GCMs are not taken at face value, however – before a given GCM’s projections can be established as trustworthy, they are subjected to a model validation. “The model is put through an extensive verification process against the past,” says Golnaraghi. “They try to replicate the past with the model to make sure that those numerous times they run it are actually going in the right direction. It’s extremely time-consuming and resource-intensive.”

A combined GCM and cat model approach could prove highly useful. GCMs measuring present-day climate risk can be compared with another set of models running climate change scenarios, and the differences between the outputs of the two groups can be evaluated.

“By comparing a climate change-conditioned model to a baseline model – a model that’s measuring the risk of climate change today – you’re given the sort of marginal effect of climate change,” says RMS’s Dailey. “That would be a test of the sensitivity of that risk to climate change, which can be measured for the industry as a whole – let’s say, all insured properties across the entire UK – or it could be run for a portfolio.”
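A minimal sketch of that baseline-versus-conditioned comparison follows, with event frequencies and severities invented purely for illustration:

```python
import random
import statistics

random.seed(1)

def annual_loss(freq, severity_scale):
    # Count event days in a year, then draw a loss for each event from an
    # exponential distribution with the given mean severity.
    n = sum(random.random() < freq for _ in range(365))
    return sum(random.expovariate(1 / severity_scale) for _ in range(n))

def mean_annual_loss(freq, severity_scale, years=5000):
    # Average over many simulated years to get an expected annual loss.
    return statistics.fmean(annual_loss(freq, severity_scale) for _ in range(years))

baseline = mean_annual_loss(freq=0.01, severity_scale=100e6)
conditioned = mean_annual_loss(freq=0.012, severity_scale=120e6)  # warmed scenario

# The difference between the two runs is the marginal effect of the scenario.
marginal = conditioned - baseline
print(f"Marginal effect of the scenario: ${marginal / 1e6:.0f}m per year")
```

Running the same loss model twice – once under today's climate, once under a warming scenario – and differencing the outputs is the "sensitivity test" Dailey describes, whether for a whole market or a single portfolio.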

So, despite the criticisms, the humble cat model is not set to be retired just yet. While it lacks predictive potential of its own, it can be used to make sensible estimates about the unknown with a little help. By using more than one type of model concurrently, insurance firms can plan for a range of potential climate change impacts – that is, plan for the realistic consequences of events that have not yet occurred in recorded history.

Historical catalogues, meanwhile, improve every year as record-keeping becomes more and more sophisticated. Modellers themselves are also largely in agreement with regard to how records-based cat modelling should be practised.

“The sweet spot is to find a period of record where we can capture a good representation of the current climate, as well as having a sufficient amount of historical record to represent the variability,” says Sousounis. “[For] most of the models we’ve built, we use the last 30 to 40 years of record.”

For AIR, this is the Goldilocks range, says Sousounis: timescales that are too short risk the misinterpretation of quasi-periodic and naturally existing climate cycles such as the El Niño-Southern Oscillation and the Atlantic Multi-decadal Oscillation, which are large enough to cause measurable changes in global temperatures and hurricane activity; and selection of timescales that are too long will start to include data that is of relatively poor quality.

Dailey says this topic in particular is extremely hot among RMS clients: “There’s absolutely been a pickup in the interest level. In 2017, we saw hurricanes Harvey, Irma and Maria, all in a row. In 2018, hurricanes Florence and Michael, and then just this year, Hurricane Dorian. We’ve had three years in a row where major hurricanes have produced major losses in highly insured areas. We’re engaged with our clients every day on climate perils, but outside of our traditional market – and even beyond capital markets – individual corporates are very much interested.”

Corporate interest and action, says The Geneva Association’s Golnaraghi, are of crucial importance if the problem is to be tackled in time. She argues that the financial industry at large must engage productively with climate and cat modelling, enhance its understanding of the work being conducted and devote significant resources to upskilling its leaders. Without mobilising in this way, she says, the decisions made will remain based on poor understanding of a complex topic.

But if the industry manages to sufficiently focus on the issue, perhaps it would help modellers find a solution with more precise results. One that is just right.

Regulatory guidance on the way?

Although the Bank of England (BoE) has not yet taken decisive steps to regulate climate-related financial risk, it is encouraging banks to start thinking about the issue. The most significant action to date was the Prudential Regulation Authority’s supervisory statement in April – a formal set of rules and policy expectations. But the 16-page document is light on practical detail. It encourages financial firms to “consider” climate risk and “embed” it into existing financial risk management practices without prescribing how. The statement sets out the importance of stress-testing, scenario analysis and disclosure procedures with some clarity, but does not provide a firm set of standards, principles or directions for implementation.

The PRA has also said that firms must assign individual responsibility for climate risk management under its flagship Senior Managers and Certification Regime – but with climate risk management nebulously defined, responsible individuals will have to await further instruction. The same is true for insurers.

“A lot of vendors have responded to the PRA, but where we’re going, exactly – the road map – is up to them,” says a senior climate scientist with one of the largest Lloyd’s of London reinsurers. He goes on to discuss the BoE’s stress-test scenarios: “They described the scenarios they would like submitted. Our understanding is that it’s not something that is compulsory ... to be used to measure capital resilience.” He confirms that his firm wants to make progress on climate risk, and will be taking part in the tests.

Many insurers interviewed for this article echo these sentiments: the PRA's attention to climate risk – while positive and a clear signal to other regulators – has not yet resulted in granular requirements. But it's a start, they agree.

“It will drive changes in behaviour,” says Willis Re’s Thomson of the current regulatory stance. “The PRA has collaborated with a number of industry experts to define these initial scenarios, and they’re in line with UK climate projections. However, insurers still need to look at how they apply the PRA stress tests to their portfolios, and that is where we have been assisting our clients with the application of the scenarios.”

Additional reporting by Tom Osborn


Plumbing problems in the repo market

On September 17, three banks may have sucked up nearly a quarter of money market fund cash

Technically, the repo market doesn’t exist.

The term is shorthand for a stack of distinct – but connected – markets, in which participants do essentially the same things, subject to subtly different rules and structures.

Those subtle differences matter, as September 17 illustrated. On that day, repo rates spiked to 10%, forcing the Federal Reserve to pump liquidity into the market in a programme currently set to extend into January.

It’s a crisis that defies simple explanation, but one ingredient that has largely escaped attention is the sponsored repo clearing system run by the Fixed Income Clearing Corporation, which threw open its doors to money market funds (MMFs) in 2017.

As a result, bank members of FICC can take in cash from the market’s biggest lenders and pass it to repo borrowers in back-to-back trades that are all funnelled via the central counterparty – a structure that allows the offsetting transactions to be netted under the terms of the leverage ratio, dramatically reducing the capital consumed by the business.
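To see why the netting matters, consider a stylised back-to-back trade; the figures are illustrative and the leverage ratio treatment is deliberately simplified:

```python
# A sponsor bank takes $10bn of money market fund cash (a repo leg) and
# lends it on to a borrower (a reverse repo leg), both faced to the CCP.
repo_leg = 10e9           # cash taken in from the money market fund
reverse_repo_leg = 10e9   # cash lent on to the repo borrower

# Facing two different counterparties bilaterally, both legs would sit
# gross on the balance sheet for leverage ratio purposes.
bilateral_exposure = repo_leg + reverse_repo_leg

# Faced to a single central counterparty, the offsetting cash legs can be
# netted, so the back-to-back trade adds (almost) nothing to the ratio.
cleared_exposure = abs(reverse_repo_leg - repo_leg)

print(f"Gross: ${bilateral_exposure / 1e9:.0f}bn, "
      f"netted: ${cleared_exposure / 1e9:.0f}bn")
```

In this stylised case, $20 billion of gross balance sheet collapses to zero leverage exposure once both legs face FICC – the capital saving that made the sponsored programme "a big hit".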

It has been a big hit. At the end of August, nearly a quarter of US Treasury repo loans made by MMFs – roughly $190 billion – were cleared, compared with practically zero at the start of 2017. This makes FICC not only the largest repo counterparty today, but the largest repo counterparty ever.

Ironically, a service designed to solve the leverage ratio problem and take the pressure off dealer balance sheets may have concentrated liquidity in the hands of a small group of sponsor banks and approved borrowers

But the FICC’s sponsored clearing service comes with a few conditions attached.

First, in order to participate, cash lenders and borrowers must be sponsored by an existing FICC member. As of September 17, only three banks sponsored money market funds for clearing – Bank of New York Mellon, JP Morgan and State Street. And initially, money market funds were only permitted to lend cash to their sponsor banks. Despite a subsequent relaxing of the rules, those sponsors continued to suck up most of the cash, which ended up with an exclusive group of approved borrowers.

Ironically, a service designed to solve the leverage ratio problem and take the pressure off dealer balance sheets may have concentrated liquidity in the hands of a small group of sponsor banks and approved borrowers, contributing to the melt-up in repo rates on September 17.

“Back when banks were limited by balance sheet restrictions, large money market fund investors would be forced to transact with many different counterparties,” says a repo trader at a broker-dealer that is a member of FICC. “Now, they just give all their cash to banks via the sponsorship programme.”

These bottlenecks may prove to be transitory. Mizuho Securities became a sponsoring member on October 15, and at least 12 more sponsors are understood to have been approved. The number of sponsored cash borrowers has also grown since September 17, with the likes of hedge funds Lighthouse Investment Partners, MKP Capital Management and Rimrock Capital joining the pool of eligible borrowers.

As it expands, the service could make repo more robust, facilitating the smooth flow of cash and collateral between buy-side lenders and borrowers.

But, in its infant state, the programme is thought by some participants to have contributed to the market’s fragility on September 17.