AMD's Latest FPGA Promises Super Low Latency AI For Flash Boy Traders
AMD has refreshed its Alveo field-programmable gate arrays (FPGAs), promising a sevenfold improvement in operating latency and the ability to run more complex machine learning algorithms on the customisable silicon.
FPGAs are often used by high-frequency traders, an industry in which a delay of a few fractions of a second can be the difference between profit and loss on algorithmically arranged trades. The ability to reprogram FPGAs with faster or more refined trading algorithms makes the devices valuable, so faster and more flexible FPGAs have obvious appeal.
AMD's Alveo UL3524 FPGAs claim reduced latency and support for AI inferencing via the FINN framework to accelerate high-frequency trading
AMD's Alveo UL3524 is the company's latest FPGA developed for this market. The card is based on Xilinx's 16nm Virtex UltraScale+ FPGA and features 64 transceivers, 780,000 lookup tables, and 1,680 digital signal processing (DSP) slices on which customers can deploy their algorithms. It's these transceivers, which AMD says were "purpose built" for low-latency trading, that drive latency down 7x compared with the previous generation, to less than 3ns.
However, at the end of the day, these FPGAs are really just a vessel for the proprietary software responsible for triggering trades when certain market conditions are met.
As with previous-gen FPGAs, AMD provides software support by way of its Vivado Design Suite, which includes various reference designs and benchmarks to help customers develop new applications for the platform. But for those looking to employ AI/ML in their trading algorithms to eke out an advantage, the card also supports the open-source FINN development framework.
The FINN project explores deep neural network inference on FPGAs. According to the project's site, the framework has proven effective at classifying images at sub-microsecond latencies.
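For a sense of what FINN development looks like in practice, the framework typically consumes a quantized network defined and trained with AMD/Xilinx's Brevitas library in PyTorch, which the FINN compiler then turns into a streaming dataflow accelerator for the FPGA. The sketch below is illustrative only and assumes Brevitas is installed; the layer sizes, bit widths, and "market feature" input are arbitrary placeholders, not anything drawn from AMD's reference designs.

```python
# Illustrative only: a tiny few-bit MLP defined with Brevitas, the PyTorch
# quantization-aware-training library that FINN consumes. All dimensions and
# bit widths here are made up for the example.
import torch
import torch.nn as nn
from brevitas.nn import QuantIdentity, QuantLinear, QuantReLU

class TinyTradingClassifier(nn.Module):
    """A small quantized MLP of the kind FINN can map onto FPGA LUTs/DSPs."""
    def __init__(self, n_features=64, hidden=32, n_classes=4, bits=2):
        super().__init__()
        self.quant_in = QuantIdentity(bit_width=bits, return_quant_tensor=True)
        self.fc1 = QuantLinear(n_features, hidden, bias=False, weight_bit_width=bits)
        self.act1 = QuantReLU(bit_width=bits, return_quant_tensor=True)
        self.fc2 = QuantLinear(hidden, n_classes, bias=False, weight_bit_width=bits)

    def forward(self, x):
        x = self.quant_in(x)
        x = self.act1(self.fc1(x))
        return self.fc2(x)

if __name__ == "__main__":
    model = TinyTradingClassifier()
    dummy = torch.randn(1, 64)      # stand-in for a vector of market features
    print(model(dummy).shape)       # torch.Size([1, 4])
    # After quantization-aware training, the model would be exported to
    # QONNX/FINN-ONNX and handed to the FINN compiler, which generates the
    # dataflow hardware deployed on a card such as the UL3524.
```

The low bit widths are the point: aggressively quantized weights and activations are what let FINN fit the whole network into on-chip logic and hit the sub-microsecond inference latencies the project advertises.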
AMD’s new toys for traders are fast and powerful. However, using machine learning to guide stock purchases isn't always a sure bet. In a paper published early in 2022, a group of researchers at three universities and IBM demonstrated how share-trading bots could be manipulated with something as simple as a single re-tweet.
At launch, AMD's UL3524 is available from a number of OEMs specializing in infrastructure for the financial sector, including Alpha Data, Exegy, and Hypertech. ®