OutOfLine – A Memory-Locality Pattern for High Performance C++

In my time at Headlands Technologies, I’ve gotten the opportunity to build some utilities that have improved the ergonomics of maintaining high-performance C++ codebases. This article will give a generic overview of one of those utilities, OutOfLine.

Let’s start with a motivating example. Suppose you have a system that opens a very large number of paths. Maybe they are files, maybe they are named UNIX sockets, maybe pipes. But for whatever reason, you open a lot of file descriptors at startup, then you do a lot of processing on those descriptors, and finally when you’re done you close the descriptors and unlink the paths.

An (abbreviated) initial design might look like this:
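// A sketch of the idea (error handling omitted): an RAII wrapper that
// closes the fd and unlinks the path on destruction.
#include <fcntl.h>
#include <unistd.h>

#include <string>

struct UnlinkingFD {
  std::string path;
  int fd;

  UnlinkingFD(const std::string& p)
      : path(p), fd(open(p.c_str(), O_RDWR, 0)) {}
  ~UnlinkingFD() {
    close(fd);
    unlink(path.c_str());
  }
  UnlinkingFD(const UnlinkingFD&) = delete;
  UnlinkingFD& operator=(const UnlinkingFD&) = delete;
};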

And that’s a nice, logically reasonable design. It RAIIs the close and unlink for you. You can allocate a big array of these things, operate on them, and they clean up after themselves when the array’s lifetime ends.

But what about performance? Suppose you use fd very often, and you use path only when cleaning up the object. Now we have an array of 40B objects, and our critical path only ever uses 4B of each one, which means you’ll see more cache line misses as you keep having to “skip over” the 90% that is overhead.

One very common solution to this is to switch from array-of-structs to struct-of-arrays. And that would net us our performance win here, but it would cost us the RAII. Is there a way to have the best of both worlds?

One initial compromise might be to store not a std::string, which is 32B, but a std::unique_ptr<std::string>, which is only 8B. That takes your object size down from 40B to 16B, which is a big win. But it’s not as good as parallel arrays.

OutOfLine is a tool that you can use to keep RAII, and move your cold members completely outside your object with zero space overhead. You use OutOfLine by inheriting from it, like this. It is a CRTP base class, so the first template argument should be the child class that is inheriting. The second argument is the cold data that should be associated with each “fast” object.
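For our UnlinkingFD example, that might look something like:

struct UnlinkingFD : private OutOfLine<UnlinkingFD, std::string> {
  int fd;

  UnlinkingFD(const std::string& p)
      : OutOfLine<UnlinkingFD, std::string>(p),
        fd(open(p.c_str(), O_RDWR, 0)) {}
  ~UnlinkingFD() {
    close(fd);
    unlink(cold().c_str());  // cold() is the accessor described below
  }
  UnlinkingFD(const UnlinkingFD&) = delete;
  UnlinkingFD& operator=(const UnlinkingFD&) = delete;
};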

And what does that class look like itself?

The implementation is based on the idea of a global map hiding somewhere that maps pointers to fast objects to pointers to their cold data.
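A minimal sketch of one way to write that, assuming C++17 for the inline static member and ignoring thread-safety (a real implementation would need to synchronize access to the map):

#include <map>
#include <memory>
#include <utility>

template <class FastData, class ColdData>
class OutOfLine {
  // The global map: address of the fast object -> owning pointer to its cold
  // data. Each (FastData, ColdData) instantiation gets its own map.
  inline static std::map<const OutOfLine*, std::unique_ptr<ColdData>> cold_map_;

 public:
  // The constructors, destructor, move operations, and cold() accessors are
  // shown piece by piece below.
};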

You can build this base from anything you can build your cold data from. And when you do, it’ll create that cold data and associate it with your fast object.
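In the sketch, that is a forwarding constructor that files the new cold data under this:

template <class... Args>
explicit OutOfLine(Args&&... args) {
  cold_map_[this] = std::make_unique<ColdData>(std::forward<Args>(args)...);
}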

When your fast object gets cleaned up, the corresponding cold object will too:
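// In the sketch, the destructor just drops this object's map entry, which
// destroys the owned cold data.
~OutOfLine() { cold_map_.erase(this); }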

When you move your fast object, the corresponding cold object is reassociated with the new fast object (remember that means you shouldn’t use the cold data on a moved-from object).
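In the sketch, moving just re-keys the map entry; the moved-from object is left pointing at nothing:

OutOfLine(OutOfLine&& other) noexcept {
  cold_map_[this] = std::move(cold_map_[&other]);
}
OutOfLine& operator=(OutOfLine&& other) noexcept {
  cold_map_[this] = std::move(cold_map_[&other]);  // also frees any old cold data
  return *this;
}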

The current implementation just makes OutOfLine non-copyable for simplicity, but one could instead choose to implement copy construction by copying the cold data.
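In the sketch, that’s simply:

OutOfLine(const OutOfLine&) = delete;
OutOfLine& operator=(const OutOfLine&) = delete;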

Now for this to be useful to us though, it has to actually be convenient to access that cold data. When you inherit from OutOfLine, your class gains const and non-const member functions cold():
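// In the sketch, these look up this object's entry in the global map
// (assuming the cold data has been initialized).
ColdData& cold() { return *cold_map_[this]; }
const ColdData& cold() const { return *cold_map_[this]; }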

Calling these gives you a reference (of appropriate constness) to your cold data.

And that’s it. This UnlinkingFD will be only 4B large, it provides access to the fd member at full speed, and it still preserves all the same RAII behavior. All the lifetime-related work is handled for you. When you move the fast object, the cold object is reassociated to the new fast object. When your fast object goes away, the cold object does too.

Sometimes though, your data conspires to make your life difficult, and the fast data must be constructed first because it is a constructor argument to the cold data. That makes the construction order the reverse of the order OutOfLine imposes on you. Also, sometimes you need the fast data to outlive the cold data (maybe the cold data holds a reference to the fast data). For these cases, we need an “escape hatch” to control the order in which data is initialized and deinitialized.

There is another constructor of OutOfLine that your class could call, one that accepts the tag type TwoPhaseInit. If you build your OutOfLine in that way, your cold data will not be initialized, and you’ll be left in a half-constructed state. You then finish your two-phase construction by calling init_cold_data (with any arguments from which you can construct a ColdData) and you’ll be done. Just remember not to call .cold() on an object that has not yet had its cold data initialized. And the parallel holds too – you can release your cold data early if your data requires it by calling release_cold_data.
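In the sketch, that escape hatch might look like:

struct TwoPhaseInit {};
OutOfLine(TwoPhaseInit) {}  // leaves the cold data unconstructed

template <class... Args>
void init_cold_data(Args&&... args) {
  cold_map_[this] = std::make_unique<ColdData>(std::forward<Args>(args)...);
}

void release_cold_data() { cold_map_.erase(this); }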

And that’s all of it. So ultimately, what did our 29 SLOC buy us? It bought us one more option in the space of tradeoffs. Any time you have an object where some members are drastically more important than other members, you might consider OutOfLine. It lets you make some members a little bit faster at the expense of making accesses to other members a lot slower, so you would reach for this in situations where that sounds like a good tradeoff to you.

We’ve been able to apply this technique in several places – it’s fairly common to want to tag fast data with extra metadata that is logged out on shutdown, or in rare or unexpected situations. Whether that’s recording which user this connection belongs to, which internal trade desk this order is attributed to, or the handle to a hardware-accelerated market-data session – this will keep your cache lines clean while you’re in your critical paths.

I’ve included a benchmark that you can use to see and explore the differences.

Scenario                                          Time (ns)
With cold data in-line (original)                34,684,547
With cold data thrown away (best-case scenario)   2,938,327
With OutOfLine                                    2,947,645

I measured an ~10x speedup by using OutOfLine. Obviously this benchmark is contrived to provide the best-case-scenario of OutOfLine, but it serves to demonstrate that cache optimization can have very real performance impact, and that OutOfLine really does deliver on that front. And keeping your data cache clear of cold data can also have a difficult-to-measure holistic benefit on the rest of your code. As always, you need to measure each application to optimize it, but this might be a useful tool to have in your belt.

Successfully Starting a Career in Quant Research

People setting out to pursue a career in the quantitative trading industry won’t always know what questions they should be asking of prospective employers. As a result, they are susceptible to making a mistake in choosing their first job, and that mistake can lead to giving up on the industry entirely when in fact they may be well suited for success. Without experience it is easy to go wrong, so I want to provide some guidance to help you avoid some common pitfalls.

Here’s a checklist of key structural questions you’ll want to understand before choosing your first firm. These questions will help you understand how well you’ll be able to learn and grow in a given company. You’ll want to learn and grow throughout your entire career, but it is especially important at the beginning when you have the least experience.

Ideally, for each of these seven questions you’ll want your first company’s answer to be “no”. “Yes” answers indicate additional challenges for you as you build your career. “Yes” answers don’t necessarily indicate a bad job, but they do indicate additional layers of risk. Below the list, I’ve provided some additional context.


  1. Are brainteasers, gambling, poker, or mental math questions used in the interview process?
  2. Will you have a 2-year noncompete?
  3. Will you be blocked from accessing any part of the source code?
  4. As a researcher, will you be the primary on-call trader monitoring any live trading processes?
  5. Will you be blocked from viewing the PnL of any strategies that utilize your research?
  6. Are strategy parameters manually changed based on judgment calls during the day?
  7. Are there other employees in the company in direct competition with you?


  1. Are brainteasers, gambling, poker, or mental math questions used in the interview?
    If a company asks these types of questions, it is potentially a sign that they value manual, non-automated trader intuition and decision making more than quantitative, algorithmic, and research-driven approaches. If your background is quantitative, you will want a company that values those skills most highly. At ‘trader’-centric firms, quants have less influence over the trading strategy and more limited career prospects.
  2. Will you have a 2-year (or greater) noncompete?
    Non-compete agreements are a fact of life for quants working in the trading industry, but the lengths of those agreements vary widely (the standard term is one year). Non-competes exist to help protect the intellectual property you’ll be developing, but one year should be sufficient to protect your work. Longer terms are sometimes designed not so much to protect the company as to limit the employee’s career options. Even if you join a good company, there’s always a chance of ending up on a weak team, with an inexperienced manager, being paid unfairly, or simply not fitting in perfectly. With a 2-year noncompete, other companies may be much less willing to hire you.
  3. Will you be blocked from accessing any part of the source code?
    Some companies encrypt or password-protect parts of their source code. This goes beyond taking adequate steps to protect proprietary property, like securing an internal filesystem from external intruders or preventing employees from copying files off the company network; these companies even prevent their own full-time employees from seeing parts of the existing codebase. To people outside the industry, the idea of encryption might sound strange enough that they wouldn’t think to ask. However, it is actually quite common at quantitative trading firms. Having parts of the source code blocked limits your ability to learn, collaborate with coworkers, and make an impact.
  4. As a researcher, will you be the primary on-call trader monitoring any live trading processes?
    Companies that highly value research will have separate dedicated operations and trading teams to handle the majority of the routine day-to-day tasks of running and monitoring an automated trading system.
    At some companies, often those with roots as floor traders or click traders, or those that struggle to manage unstable operations processes, ‘quant traders’ are expected to do both research and operations. While this might seem exciting at first, monitoring live trading and system health and ensuring system functionality is a full-time responsibility, and it will severely reduce the time you have available to concentrate and do high-quality research.
  5. Will you be blocked from viewing the PnL (revenue) of any strategies that utilize your research?
    One of the main attractions of working in trading is the fast feedback you get on your research. You can think of an idea, implement the idea, and then see the results within a few days. This tight feedback loop compares favorably to, say, a Physics department, where a single idea could take years to validate. However, some companies separate alpha signal researchers from strategy developers. If you’re separated from the trading PnL, then you can’t get real-time feedback. Companies might do this to prevent their secrets from leaking out easily, but there are plenty of successful companies that trust their employees and encourage loyalty in other ways.
  6. Are strategy parameters manually changed based on judgment calls during the day?
    It’s almost impossible to do proper statistical analysis of your ideas on historical data if ‘click-traders’ are tweaking parameters, because you can’t model the human element. That kind of company is a good place to be a ‘click-trader’ – not a quant.
  7. Are there other employees in the company in direct competition with you?
    Okay, final question! This is important to ask because some companies have employees or teams directly competing with each other so that the company’s revenue streams are diversified. But for your career, you want your company to invest fully in you. If there are competing teams, you don’t know whether you’ll end up on one that eventually wins or loses. There will also be fewer opportunities to learn from others; you don’t get the benefit of collaborating with the widest group of other quants. Finally, this type of structure tends to foster a cut-throat, ‘zero-sum-game’ culture.

I hope you can get clear ‘yes’ or ‘no’ answers to all 7 of these questions from all your prospective employers. Consider a vague or indirect answer a ‘yes’. Don’t let yourself be persuaded just because you’re less informed than you will be later in your career. Finally, it is never a bad idea to double-check with friends or classmates who are now in the quantitative trading industry. Forewarned, you can avoid these 7 problems that I’ve seen through my friends’ experiences, and get to work on the fascinating problems tackled in quantitative trading.

Quantitative Trading Summary

This summary is an attempt to shed some light on modern quantitative trading since there is limited information available for people who are not already in the industry. Hopefully this is useful for students and candidates coming from outside the industry who are looking to understand what it’s like working for a quantitative trading firm. Job titles like “researcher” or “developer” don’t give a clear picture of the day-to-day roles and responsibilities at a trading firm. Most quantitative trading firms have converged on roughly the same basic organizational framework so this is a reasonably accurate description of the roles at any established quantitative trading firm.

The product of a quantitative trading company is an automated software program that buys and sells electronically traded securities to make a profit. This software program is supported by many systems designed to maintain and optimize it. Most companies are roughly divided into 3 main groups: strategy research, core development, and operations. Employees generally start and stay in one of these groups throughout a career. This guide focuses on strategy research and core development roles.

Primary job requirements:

  • Strategy research (‘research’): Programming, statistics, trading intuition, and the ability to understand market data
  • Core development (‘dev’): Low-level software engineering, networking, and system architecture

The software components of a quantitative trading system are built by one of these two teams. The majority of the components are built in-house at most major trading firms, so below is a list of the programs you could expect to build or maintain if you were on the research or dev teams. Each of these programs can be a separate process, although we’ll discuss some variants later.

Programs for live production trading:

  1. Market data parser: Dev. Normalizes each exchange’s protocol (including different versions over time) into the same internal format.
  2. Trading strategy: Research/Dev. Receives normalized data, decides whether to buy or sell.
  3. Order gateway: Dev. Converts from internal order format to each exchange’s order entry protocol (different than the market data protocol).

Programs to support live production trading:

  1. Monitoring GUI: Dev. GUIs used to be important for click traders but are now mainly used to monitor that the trading system is performing appropriately. They are still occasionally used to manually adjust a few parameters, such as overall risk tolerance.
  2. Drop copy: Dev. Secondary order confirmation to make sure you have the trading positions you think you do.
  3. Market data capture: Dev. Records the market data in parallel to what’s going into the strategy, both to verify later that the strategy behaved as intended and to run statistical tests on historical data. (Live capture is more reliable than purchasing data from a vendor, so most major firms avoid buying it.)
  4. Startup scripts: Dev/Operations. Launch all these different software programs in the right order and at the right time of day each time they need to be restarted (typically daily or weekly), and alert or recover from startup problems.

Programs to optimize and analyze the trading strategy:

  1. Parameter optimization: Research. Regressions or other metrics to help people compare one trading strategy parameter setting to another to find the best.
  2. Production reconciliation: Research. Metrics to confirm that the state of the trading strategy’s internal algorithm matches calculations made from captured market data.
  3. Back testing simulator: Research. Shows estimated trading strategy profit or loss on historical data.
  4. Graphing: Dev/Research. Display profit or loss, volume, price and other statistics over time.

In a ‘typical’ established quantitative trading company the department breakdown would be:

  • Research
  • Dev
  • Back office and operations
    • operations/monitoring
    • telco/networking/hardware
    • accounting/HR
    • management/business development
    • legal/compliance

Since we won’t focus on them later, here’s a brief description of the latter groups:

  • Operations/monitoring: Monitor strategies and risk intraday and overnight to ensure there are no problems (like Knight Capital’s $400m+ loss).
  • Telco/networking/hardware: Purchase and rack servers, configure switch firmware, operating system settings, and network interface cards or FPGAs, connect co-located datacenters (possibly in different countries), etc.
  • Accounting/HR: Like any business, there is tax, accounting, and human resources work.
  • Management/business development: There’s a lot of legwork to trading multiple exchanges around the world, such as finding contacts in other countries, negotiating fees, licensing telecom networks, and keeping ahead of new updates.
  • Legal/compliance: Trading is one of the most regulated industries. There are US and international regulators (SEC, CFTC, FCA, etc), huge and diverse rulesets such as MIFID, industry regulating agencies like FINRA, and exchange-level self-regulatory regimes (CME, NYSE, etc) that each have their own rules. Ensuring and documenting compliance with each set of rules takes a lot of work.

Some of the key differentiating factors between quantitative trading companies are:

  1. How they divide up research teams – internal collaboration vs competition between siloed research/trading teams.
  2. Which exchanges and products they focus on.
  3. What type of trading strategy is used and how it’s optimized.

Although we are unable to explain how each specific company divides up their research/trading teams, the overall structure and organization of employees and software at most major quantitative trading firms follows a similar general pattern to what was described above.

Trading strategies

Now that you have a high-level understanding of what a typical quantitative trading company does and the different roles that exist, let’s go into more detail about trading strategies.

The industry has generally settled on three main types of strategies that are sustainable because they provide real economic value to the market:

  • Arbitrage: Arbitrage and its economic benefits have been well understood for quite some time and documented by academia. The companies that are still competitive in arbitrage have one of 3 advantages:
    • Scale: To determine that some complex option or futures spread products are mispriced relative to a set of others, nontrivial calculations must be performed, including the fee per leg, and then the hedged position has to be held and margined until expiry. Being able to manage this and have low fees requires scale.
    • Speed: Speed comes either from having faster telco or from being able to hedge. For example, triangular arbitrage on FX products traded in London, NY, and Japan was a major impetus for the Go West and Hibernia telecom projects. Arbitrageurs rely on the speed of their order gateway connections so they can hedge on related markets if they are overfilled.
    • Queue position: Being able to enter one leg of an arbitrage by passively buying on the bid or selling on the offer reduces costs by not having to cross the spread on that leg, so being able to achieve good queue position can give an edge in arb trades.
  • Market Taking: Placing a marketable buy or sell order to profit from a predicted price change. The economic value market takers are being paid for is either:
    • properly pricing the relative value of related securities
    • trading and thereby contributing to price discovery in products after observed changes in supply and demand

    Like a real estate negotiation in which the deal’s value can change minute-by-minute as the negotiators come to the table face-to-face and discover each other’s positions, even though the fundamentals of the property or the real estate market certainly don’t fluctuate by the minute, market takers are the high-stakes mediators of the trading world. Market taking requires predictive signals and relatively low latency, because you pay to cross the spread. A common low-latency market taking strategy is to attempt to buy the remaining liquidity at a price after a large buy trade. Some firms have FPGAs configured to send orders as soon as they see a trade message matching the right conditions (more on this later).

  • Market Making: Posting passive non-marketable buy and sell orders with the goal to profit from the spread. The economic value market makers are being paid for is connecting buyers and sellers who don’t arrive at the market at the same time. Market makers are compensated for the risk that there may be more buyers than sellers or vice versa for an extended time, such as during times of market stress.

Basic trading system design

A quantitative trading system’s input is market data and its output is orders. In between is the strategy algorithm.

Input

The input to a trading system is tick-by-tick market data. The input is handled in an event loop. The events are the packets sent by the exchange that are read off the network and normalized by the market data parser. Each packet gives information about the current supply and demand for a security and the current price. A packet can tell you one of three things:

  • A limit order was added to the book. Primary fields: {price, side, order id, quantity}
  • A limit order was canceled. Primary fields: {order id}
  • A trade occurred. Primary fields: {price, aggressor side, quantity}

For example, a few packets look like this (for a more detailed, real example see here):

AddOrder { end_of_packet: 1; seq_number: 103901; symbol_id: 81376629; receive_time: 13:03:46.304606537089; source: 1; side: S; qty: 1; order_id: 210048618; price: 99.25; }

CancelOrder { end_of_packet: 1; seq_number: 103900; symbol_id: 81376629; receive_time: 13:03:41.863834923132; source: 1; qty: 0; order_id: 210048542; price: 99.00; side: S; }

Trade { end_of_packet: 1; seq_number: 103902; symbol_id: 81376629; receive_time: 13:03:46.304606537835; source: 1; aggressor_side: B; qty: 1; order_id: 210048321; price: 99.00; match_id: 20154940; }

If the trading system adds up all the AddOrder packets and subtracts CancelOrder and Trade packets, it can see what the order book currently looks like. The order book shows the aggregate visible supply and demand currently available at each price. The order book is an industry-standard normalization layer.
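To make that concrete, here is a simplistic sketch (hypothetical types, no error cases, doubles as price keys for brevity) of how a feed handler might maintain the aggregate book from those three event types:

#include <cstdint>
#include <functional>
#include <map>
#include <unordered_map>

enum class Side { Buy, Sell };

// Simplistic book builder: track each resting order by id and aggregate the
// visible quantity at each price level.
class Book {
 public:
  void add_order(uint64_t id, Side side, double price, uint64_t qty) {
    orders_[id] = Order{side, price, qty};
    level(side, price) += qty;
  }
  void cancel_order(uint64_t id) { reduce(id, UINT64_MAX); }  // remove it all
  void trade(uint64_t resting_id, uint64_t qty) { reduce(resting_id, qty); }

  // bids().begin() is the best bid; asks().begin() is the best offer.
  const std::map<double, uint64_t, std::greater<double>>& bids() const { return bids_; }
  const std::map<double, uint64_t>& asks() const { return asks_; }

 private:
  struct Order { Side side; double price; uint64_t qty; };

  void reduce(uint64_t id, uint64_t qty) {
    auto it = orders_.find(id);
    if (it == orders_.end()) return;  // unknown id: ignored in this sketch
    Order& o = it->second;
    const uint64_t dec = qty < o.qty ? qty : o.qty;
    uint64_t& lvl = level(o.side, o.price);
    lvl -= dec;
    if (lvl == 0) {
      if (o.side == Side::Buy) bids_.erase(o.price);
      else asks_.erase(o.price);
    }
    o.qty -= dec;
    if (o.qty == 0) orders_.erase(it);
  }
  uint64_t& level(Side s, double px) { return s == Side::Buy ? bids_[px] : asks_[px]; }

  std::unordered_map<uint64_t, Order> orders_;
  std::map<double, uint64_t, std::greater<double>> bids_;  // highest price first
  std::map<double, uint64_t> asks_;                        // lowest price first
};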

When you add up all the orders, the order book could look like this:

Sell 10 for $99.25

Sell 5 for $99.00 (best offer)

Buy 10 for $98.75 (best bid)

Buy 10 for $98.50

This is the main view of the market data input used by the strategy algorithm.

Strategy algorithm

To put into practice what we discussed above, let’s outline a market taking strategy built on what are often referred to as market microstructure signals, one that may have made money back before quantitative trading became very competitive. Some companies have each member of their intern class program a strategy like this as a teaching project during a summer. This strategy calculates some signals using the order book as input, and buys or sells when the aggregate signals are strong enough.

Market microstructure signals

A signal is an algorithm that takes market data as input and outputs a theoretical price for a security. Market microstructure signals generally rely on price, size, and trade data coming directly from the data feeds. Please refer back to the order book state shown previously as we walk through the following signal examples.

  • A basic signal, likely used in some form by most firms, is ‘book pressure’. In this case book pressure is simply (99.00*10 + 98.75*5)/(10+5) = 98.9167. Because there is more demand on the bid, the theoretical price is closer to the offer than the bid. Another way of understanding why this is a valid predictor is that if buy and sell trades arrive randomly in the market on the bid and offer, there’s a 2/3 chance of the entire offer being filled before the entire bid, because the bid is 2 times bigger, so the expected future price is slightly closer to the offer than the bid.
  • A second basic signal that many quantitative trading firms use is ‘trade impulse’. A common form is to plug trade quantity into something like the book pressure formula, but with the average bid and offer quantity in the denominator instead of the current quantity (let’s say the average is 15). So if there is a sell trade for 9 on this book, the trade impulse would be -0.25*9/15 = -0.15. This example signal would only be valid for the span of 1 packet. Another way of understanding why this is a valid predictor is that sometimes buy and sell trade quantity is autocorrelated over very short intervals, because there are often multiple orders in flight sent in reaction to the same trigger by different people (this is easily measured), so if you see one sell trade, then typically the next order will also be a sell.
  • A third common basic signal is ‘related trade’. Basically, you could take the same trade impulse signal as above, but translate it over from a different security that is highly correlated, by multiplying it by the correlation between the two.

The book pressure and trade impulse signals are enough to create a market taking strategy. After the sell trade for 9, the remaining quantity on the book is:

Sell 10 for $99.25

Sell 5 for $99.00 (best offer)

Buy (10-9 = 1) for $98.75 (best bid)

Buy 10 for $98.50

But now our theoretical price = book pressure + trade impulse = (99.00*1 + 98.75*5)/(1+5) + (-0.25*9/15) = 98.79167 - 0.15 = $98.64167! Since our theoretical price is below the best bid, we will send an order to sell the last remaining quantity of 1 at $98.75, for a theoretical profit of $0.10833.
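For concreteness, here is that arithmetic as code. The trade-impulse form shown (the signed spread, scaled by trade quantity over average quantity) is just one reading of the example above:

#include <cstdio>

// Microprice-style book pressure: weight each side's price by the opposite
// side's quantity.
double book_pressure(double bid_px, double bid_qty, double ask_px, double ask_qty) {
  return (ask_px * bid_qty + bid_px * ask_qty) / (bid_qty + ask_qty);
}

// One reading of the trade impulse above: signed spread, scaled by trade
// quantity relative to the average book quantity.
double trade_impulse(bool sell_aggressor, double spread, double trade_qty, double avg_qty) {
  const double dir = sell_aggressor ? -1.0 : 1.0;
  return dir * spread * trade_qty / avg_qty;
}

int main() {
  // Book after the sell trade for 9: bid 1 @ 98.75, offer 5 @ 99.00.
  const double theo = book_pressure(98.75, 1, 99.00, 5) +
                      trade_impulse(true, 99.00 - 98.75, 9, 15);
  std::printf("theo = %.5f\n", theo);  // ~98.64167, below the 98.75 bid
}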

That is a high-level overview of a simple quantitative strategy, and provides a basic understanding of the flow from the input (market data) to the output (orders).

Digression: Trade signal on an FPGA

If you ran the market taking strategy from the previous section live in a real trading system, you would likely find that your orders rarely get filled. You want to trade when your theoretical price implies there’s a profitable opportunity, but other trading systems are faster than yours so their orders reach the market first and there’s nothing left for you.

State of the art latency, as of 2017, can be achieved by putting the trading logic on an FPGA. A basic trading system architecture with an FPGA is to have the FPGA connected directly to the exchange and also to the old trading system. The old trading system is now only responsible for calculating hypothetical scenarios. Instead of sending the order, it notifies the FPGA what hypothetical condition needs to be met to send the order. Using the same case as before, it could hypothetically evaluate the signal for a range of trade quantities:

  • Sell trade, quantity = 1…
  • Sell trade, quantity = 2…
  • Sell trade, quantity = 3…
  • Sell trade, quantity = 4…
  • Sell trade, quantity = 5…
  • Sell trade, quantity = 6: (99.00*4 + 98.75*5)/(4+5) + -0.25*6/15 = 98.7611
  • Sell trade, quantity = 7: (99.00*3 + 98.75*5)/(3+5) + -0.25*7/15 = 98.7271
  • Sell trade, quantity = 8…

With any sell trade of quantity 7 or more, the theoretical price would cross below the threshold of the best bid (98.75), indicating a profitable opportunity to trade, so we’d want to send an order to sell the remaining bid. With a trade quantity of 6 or less we wouldn’t want to do anything.

The FPGA is pre-programmed to know the byte layout of the exchange’s trade message, so all it has to do now is wait for the market data, and then check a few bits and send the order. This doesn’t require advanced Verilog. For example, the message from the exchange could look like the following struct:

struct Trade {
  uint64_t time;
  uint32_t security_id;
  uint8_t side;
  uint64_t price;
  uint32_t quantity;
} __attribute__((packed));
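In software terms, the armed check is just a couple of field comparisons. A hypothetical C++ analog of what the FPGA does in hardware (the arm/fire interface here is invented for illustration):

#include <cstdint>
#include <cstring>

// Hypothetical: the strategy "arms" the check with a side and a minimum
// quantity; on each Trade message, the fields are compared and an order is
// fired on a match.
bool should_fire(const uint8_t* bytes, uint8_t armed_side, uint32_t armed_min_qty) {
  Trade t;
  std::memcpy(&t, bytes, sizeof(t));  // the FPGA inspects fixed bit offsets instead
  return t.side == armed_side && t.quantity >= armed_min_qty;
}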

Because of the relative ease of this setup, it has become a very competitive trade – some trading firms can make these types of trade decisions in less than one microsecond. Because the FPGA connects directly to the exchange, an additional exchange connection must be purchased for each FPGA. Unfortunately, if you only have one shared connection and broadcast data internally with a switch, the switch might introduce too much latency to be competitive, so many companies now pay for multiple connections, which raises their costs significantly.

Digression: ‘Minimum viable trading system’

As I mentioned above, the simple 3-signal trading strategy could have made money several years ago. Even a few years ago, the ‘minimum viable trading system’ that could cover trading fees was simple enough that an individual could build a successful one. Here’s a good article by someone who created their own trading system in 2009; it could be another starting point for understanding the basics of automated trading if all of this has gone over your head: http://jspauld.com/post/35126549635/how-i-made-500k-with-machine-learning-and-hft.

This guide only covers, at a high level, trading and work being done by professionals in established quantitative trading firms, so things like co-location, direct connection to the exchange without going through an API, using a high-performance language like C++ for production (never Python, R, Matlab, etc), Linux configuration (processor affinity, NUMA, etc), clock synchronization, etc are taken for granted. These are large and interesting topics which are now well understood inside and outside the industry.

Other strategies besides market microstructure signals

Strategies based on market microstructure signals, as described above before the two digressions, are just one type of strategy. Here are some other example trading strategy algorithm components used by many major quantitative trading companies:

  • Model based
  • Rule based, for example (see the sketch after this list):
    • Only buy or sell during a certain time range
    • Don’t trade against an iceberg order
    • Cancel a resting order if the queue position is worse than 50%
    • Don’t trade if the last 10 trades lost money
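A hypothetical sketch of how rules like these might gate a strategy (the names and thresholds are invented for illustration):

#include <deque>

// Invented parameters for illustration.
struct RuleParams {
  int open_hour = 9;            // only buy or sell during a certain time range
  int close_hour = 16;
  double max_queue_pos = 0.50;  // cancel a resting order deeper than 50% of the queue
  int loss_window = 10;         // stop if the last 10 trades all lost money
};

bool entry_allowed(const RuleParams& p, int hour, bool against_iceberg,
                   const std::deque<double>& recent_trade_pnls) {
  if (hour < p.open_hour || hour >= p.close_hour) return false;  // time-range rule
  if (against_iceberg) return false;                             // iceberg rule
  if ((int)recent_trade_pnls.size() >= p.loss_window) {
    bool all_losing = true;
    for (int i = 0; i < p.loss_window; ++i) {
      if (recent_trade_pnls[recent_trade_pnls.size() - 1 - i] >= 0) {
        all_losing = false;
        break;
      }
    }
    if (all_losing) return false;                                // losing-streak rule
  }
  return true;
}

bool should_cancel_resting(const RuleParams& p, double queue_pos_fraction) {
  return queue_pos_fraction > p.max_queue_pos;                   // queue-position rule
}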

Supporting research infrastructure

Now that you have a brief high-level overview of the production trading system, let’s dive deeper into research. The job of a researcher is to optimize the settings of the trading system and to ensure it is behaving properly. Working for an established company, this whole software system will likely already be in place, and your job would be to make it better.

With that in mind, here are some more details about 4 other main software components I listed above that are programmed and used by the research team to optimize and analyze the trading strategy:

  1. Parameter optimization: Most major quantitative trading firms have a combination of signals, model-based pricing, and rule-based logic. Each of these also has parameters. Parameters enable you to tailor a generic strategy to make more money on a specific product, or to adapt it over time. For one product, you might want to weight a certain signal higher than another, or you might want to down-weight it as the signal decays. You quickly run into the curse of dimensionality as parameter permutations multiply. One of the main jobs for a researcher is to figure out the optimal settings for everything, or to figure out automated ways of optimizing them. Some approaches include:
    • Manual selection based on intuition
    • Regression for signal weights or hedge ratios
    • Live tweaking or AB testing in production
    • Backtesting different settings and picking the best
  2. Production reconciliation: Sophisticated strategies have many internal components that need to be continually verified in live production trading. Measuring these, monitoring them, and alerting on discrepancies is how researchers make sure things are working as they expected. If the algorithm performs differently in production than it did on historical data, then it may lose money when it was supposed to be profitable.
  3. Backtesting simulator: Plenty of information is available publicly about backtesting, such as the tools available from Quantopian or TradeStation. Simulating a low latency strategy using tick data is challenging. The volume of data to simulate a single day reaches into the 100s of GBs so storing and replaying data requires carefully designed systems.
  4. Graphing: The trading strategy is a mathematical formula in a computer, so debugging it and adding new features can be difficult. Utilizing a Python or JavaScript plotting library to publish custom data and statistics can be helpful. Additionally, it is essential to understand positions and profits or losses during and after the trading day. Graphical representations of different types of data sets makes many tasks easier.

Conclusion

Most people who are new to the industry think that researchers primarily work on new signal development, and developers primarily optimize latency. Hopefully now it’s obvious that the system has so many components that those two jobs are just a few parts of a much wider set of roles and responsibilities. The most important skills for success are actually very close attention to detail, hard work, and trading intuition. On top of that it should be clear that having strong programming skills is essential. All of these systems are tailor-made in-house and have to be constantly tweaked and improved by the users themselves – you.


If you’re interested in joining our team at Headlands, please see our careers page and send your resume to careers@headlandstech.com.


Note:

The information above is a collection of some helpful information to shed some light on what a quantitative trading firm does and what you could be doing if you worked at one. The information, although intended to be helpful to you, should not be relied on and is not represented to be accurate or current. Please note this is by no means an exhaustive description of what goes on at a quantitative trading firm. Nor should this be taken as covering industry best practices or everything you need to know to start trading quantitatively. This is simply a very high-level overview of information I think those considering joining a quantitative trading firm may find useful as they navigate the interview process.

Appendix: Latency and the timing of events

Similar to the breakdowns by Grace Hopper (https://youtu.be/ZR0ujwlvbkQ?t=45m08s) and Peter Norvig (http://norvig.com/21-days.html#answers), here’s a table of approximately how long things take:

  • Time to receive market data and send an order via an FPGA: ~300 nanoseconds
  • Time to receive market data and send an order via a ‘slow’ software trading system: ~30 microseconds
  • Minimum time between two packets from the exchange: ~10-1000 microseconds
  • Microwave between BATS and INET stock exchanges: ~100 microseconds
  • Fiber between BATS and INET stock exchanges: ~150 microseconds
  • Time for an exchange to match an order and send a response: ~100 microseconds – ~5 milliseconds
  • Microwave between NY and Chicago: ~4 milliseconds
  • Fiber between NY and Chicago: ~7 milliseconds
  • Fiber between NY and European exchanges: ~35 milliseconds

Appendix: Exchange idiosyncrasies

Exchanges almost all use different technology, some of which dates back 10+ years. Different technology decisions and antiquated infrastructure have resulted in trading idiosyncrasies. There are many publicly available discussions of the effects of these idiosyncrasies. Here are a few interesting items:

https://www.eurexchange.com/blob/238346/40b5f1d684271727ef8c9c8cb9cdd09e/data/presentation_insights-into-trading-system-dynamics_en.pdf

https://www.bloomberg.com/news/articles/2017-03-17/currency-traders-race-to-reform-last-look-after-bank-scandals

https://www.nyse.com/publicdocs/nyse/markets/nyse/NYSE-Order-Type-Usage.pdf

http://www.cmegroup.com/notices/disciplinary/2016/11/NOTICE-OF-EMERGENCY-ACTION/NYMEX-16-0600-ELDORADO-TRADING-GROUP-LLC.html

https://brillianteyes.wordpress.com/2010/08/28/espeed-trading-procotol/

http://cdn.batstrading.com/resources/membership/CBOE_FUTURES_EXCHANGE_PLATFORM_CHANGE_MATRIX.pdf

http://quantitativebrokers.com/wp-content/uploads/2017/05/match-20130603.pdf

www.wsj.com/articles/SB10001424127887323798104578455032466082920

https://www.wsj.com/articles/SB10001424127887324766604578456783718395580