Artificial Intelligence is widely seen as the main driver of the 4th Industrial Revolution. IDC predicts that investment in AI will grow from $12bn in 2017 to $57.6bn by 2021, while Deloitte Global predicts the number of machine learning pilots and implementations will double in 2018 compared to 2017. As a result, companies from every industry have been spurred on to seize the trend and innovate – from virtual assistants to cyber security to fraud detection and much more. The majority of C-level executives agree that AI will have an impact on their industry. However, only 20% of C-level executives say they have already adopted AI technology in their businesses, according to research conducted by McKinsey. So there is plenty of scope for change and improvement. The Finance industry is anticipated to lead the way in the adoption of AI, with a significant projected increase in spending over the next three years.
Until recently, practitioners have relied faithfully on neo-classical models to measure performance, whether in financial organisations or marketing corporations. AI is a new technology that offers an automated solution to these processes: it can replicate cognitive decisions made by humans while removing the behavioural biases inherent in human judgement.
Machine learning and sentiment analysis are specific techniques applied in AI. These techniques are maturing and rapidly changing the landscape of FinTech. To process and understand the masses of data now available, machine learning and sentiment analysis have become essential methods that open the gateway to data analytics. To keep up with ever-expanding datasets, it is only natural that the techniques and methods used to analyse them must also improve.
This conference will help you to demystify the buzz around AI and separate the reality from the hype. Learn how you can benefit from the unprecedented progress in AI technologies. Participants will be presented with real insights on how they can exploit these technological advances for themselves and their companies.
Topics Covered Include:
We are inviting speakers – thought leaders, subject experts and start-up entrepreneurs – to share their knowledge and enthusiasm about their work and their vision in the field of AI, Machine Learning, Sentiment Analysis and Deep Learning.
We understand that successful projects are written up as “White Papers”. Please share these with us. But projects that did not achieve their targets – “Black Papers” – are of interest to us too; they can make for important discussion topics and panels. Talk to us about both – we welcome your input.
Please complete the speaker’s response form and submit a proposal to present at this event.
James Luke, Distinguished Engineer, Public Sector, IBM
James has been delivering Artificial Intelligence solutions that solve real problems for over 25 years. In this presentation, he will dig through the hype and use real examples to explain what it takes to deliver working AI solutions.
Peter Hafez, Head of Data Science, RavenPack
In order to maintain an edge in the marketplace, asset managers are to a large extent turning to unstructured content for alpha creation, using NLP and text analysis techniques. In addition, more and more managers are expanding their mandate, trading global portfolios, to ensure more scalable strategies. As part of his presentation, Peter will showcase how news sentiment can be a valuable input to such a process.
Nishant Chandra, Data Science Leader, AIG Science
Deep learning has created a revolution in the natural language processing domain and corporations are leveraging it in various ways. The technology barrier is significantly reduced with open source technologies that are easy to configure and use. Several generic open source tools are available in machine learning, including deep learning, which can be customized for natural language processing. This presentation will help the audience to go beyond generic NLP problem solving by leveraging deep learning, customizing it for their industry. Specifically, they’ll learn that:
♦ Sentiment doesn’t have to be just positive, negative or neutral; it can be extracted from the context of a conversation
♦ Summarization doesn’t have to cover an entire document; it can target only a specific context
♦ Text classification doesn’t have to match exact text, phrases or spellings; it can also handle variations of acronyms and synonyms
♦ NLP can be applied broadly, and complex use cases can be built through intelligent iteration on simple examples.
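As a toy illustration of customizing text classification beyond exact matching, the sketch below normalises acronyms and synonyms to canonical tokens before keyword lookup. The token maps and labels are invented for illustration and are not drawn from the presentation:

```python
def normalise(text, synonyms):
    """Map acronyms/synonyms to canonical tokens before matching."""
    return [synonyms.get(tok, tok) for tok in text.lower().split()]

def classify(text, keyword_labels, synonyms):
    """Return the label of the first canonical keyword found, else 'other'."""
    for tok in normalise(text, synonyms):
        if tok in keyword_labels:
            return keyword_labels[tok]
    return 'other'

# Hypothetical vocabulary: 'm&a' and 'acquisition' both normalise to 'merger'
SYNONYMS = {'m&a': 'merger', 'acquisition': 'merger', 'fx': 'currency'}
LABELS = {'merger': 'deals', 'currency': 'macro'}
```

A production system would of course learn these mappings rather than hard-code them, but the principle – normalise variation first, classify second – is the same.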
Xiang Yu, Chief Business Development Officer, and Gautam Mitra, CEO/Director, OptiRisk Systems/UCL
We compute daily trade schedules using a time series of historical equity price data and applying the powerful mathematical concept of Stochastic Dominance. In contrast to the classical mean-variance method, this approach improves the tail risk as well as the upside of the return. In our recent research we have introduced and combined market sentiment indicators and technical indicators to construct enhanced RSI and momentum filters. These filters restrict the choice of the asset universe for trading. The consistent performance improvement achieved in back-testing vindicates our approach.
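The filtering idea can be sketched in simplified form. The RSI below is the standard n-period formula; the sentiment threshold and RSI cut-offs are illustrative placeholders, not OptiRisk's actual enhanced filters:

```python
def rsi(prices, n=14):
    """Standard n-period Relative Strength Index from a price series."""
    gains = losses = 0.0
    for prev, cur in zip(prices[-n - 1:-1], prices[-n:]):
        change = cur - prev
        if change > 0:
            gains += change
        else:
            losses -= change
    if losses == 0:
        return 100.0
    rs = gains / losses
    return 100.0 - 100.0 / (1.0 + rs)

def filter_universe(assets, rsi_lo=30.0, rsi_hi=70.0, sent_min=0.0):
    """Keep assets that are neither overbought nor oversold by RSI
    and whose aggregated news sentiment clears a minimum threshold.
    `assets` maps name -> (price series, sentiment score)."""
    return [name for name, (prices, sentiment) in assets.items()
            if rsi_lo <= rsi(prices) <= rsi_hi and sentiment >= sent_min]
```

The actual filters in the research combine sentiment with RSI more elaborately; this sketch only shows how the two signals jointly restrict the tradable universe.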
Richard Peterson, CEO, MarketPsych Data
In this talk Dr. Richard Peterson describes how media analytics are providing new insights into the origins and topping process of asset price bubbles. Examples from price bubbles including the China Composite, cryptocurrencies, housing, and many others will be explored. Recent mathematical models of bubble price action will be augmented with sentiment analysis. Attendees will leave with new models for identifying and taking advantage of speculative manias and panics.
Pierce Crosby, Director of Business Development and Revenue Strategy for StockTwits
StockTwits is the largest independent social network set up for investors and traders to talk about investing. In addition to covering 8,300 stocks per year, the network also discusses 1,500+ alternative assets, including FX, futures, fixed income, private companies, ETFs/indexes, and cryptoassets. With data stretching back to 2009, the network is a rich dataset for both quantitative investing and model development. In this talk, we will discuss the methodology behind developing an NLP-based social signal, as well as some of the academic studies run in parallel with this research. We will also discuss some of the ways in which it is being deployed in markets today.
Jakub Kolodziej, Quantitative Research Senior Associate Analyst, Macquarie Research
Macquarie analyse a large dataset of email receipts that covers the purchases of more than two million US customers. The data, sourced from QUANDL, contains weekly information on all the items purchased by each individual consumer from a large set of companies including Amazon, Walmart and Apple. In particular, for each product the data gives a description, its likely classification in terms of broad goods categories, the price paid, the number of units, shipping costs, any discounts received and many more fields. Consumers opt in to share information available from their email accounts with a data vendor. The data is anonymised, but each consumer is assigned a unique identifier which allows Macquarie to follow individual purchase histories over time and infer a profile.
Using Amazon.com as a case study, they show that the data can generate real-time forecasts of quarterly sales that are at least as accurate as consensus. It is, however, in combining analyst insights and big data that they find the most significant improvement in predictive power. They also highlight the possibilities opened by this kind of large-scale database for a truly quantamental approach to equity valuation. Finally, they describe the technological solutions adopted to overcome the challenges posed by a dataset that can reach hundreds of millions of rows for a single firm.
Lyndsey Pereira-Brereton, Data Visualization Editor, Bank of England
With the explosion in the amount of data and the burden of information overload, how can we get the most out of our data and communicate this effectively? In my talk I will show how the Bank of England is using data visualisation to see through the data fog and better communicate our findings.
Moderator: Gautam Mitra; Panellists:
Guillaume Vidal, CEO, Walnut Algorithms
♦ How machine learning fits into systematic strategies
♦ What are the pros and cons of using machine learning in quant finance
♦ Building an infrastructure that enables machine learning research within the company
♦ Debunking the myth of superintelligence in finance
Asger Lunde, Director, Copenhagen Economics and Professor of Economics, Aarhus University
This paper studies forecasting of Chinese macroeconomic time series using a large number of prediction variables. We investigate the extent to which forecasts improve when news sentiment indexes are included among the predictors. Due to the large number of predictors, we summarize them with a smaller subset of indexes built with principal component analysis. An approximate dynamic factor model is then fitted to these indexes and used for 3-, 6- and 12-month-ahead forecasts of four Chinese macroeconomic time series (Balance of Payments, exchange rate with the US dollar, GDP and unemployment rate). In total we use 132 predictors from various sources ranging from 2000 through 2017. The results suggest that forecasts obtained with this method outperform univariate autoregressions, and that at shorter prediction horizons the news indexes improve the forecasts.
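A stripped-down sketch of this diffusion-index idea, assuming standardised predictors, a single principal-component factor and a one-equation least-squares forecasting regression (the paper's approximate dynamic factor model is considerably more elaborate):

```python
import numpy as np

def diffusion_index_forecast(X, y, h=1, k=1):
    """Forecast y_{T+h} from k principal components of predictors X (T x N).
    Fits y_{t+h} = a + b*y_t + c'F_t by least squares, then plugs in period T."""
    Xs = (X - X.mean(0)) / X.std(0)              # standardise each predictor
    U, s, Vt = np.linalg.svd(Xs, full_matrices=False)
    F = U[:, :k] * s[:k]                         # estimated factors (T x k)
    Z = np.column_stack([np.ones(len(y) - h), y[:-h], F[:-h]])
    beta, *_ = np.linalg.lstsq(Z, y[h:], rcond=None)
    z_T = np.concatenate([[1.0], [y[-1]], F[-1]])
    return z_T @ beta, beta
```

With 132 predictors, the point of the compression is that `Z` has only a handful of columns regardless of how many raw series feed into the factors.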
Francesco Cricchio, CEO, Brain and Matteo Campellone, Executive Chairman, Brain
Brain has developed a set of models based on machine learning methods to statistically classify assets that are more likely to have a positive/negative return over the following time period. Input data can be conventional series (fundamentals, financial time series) or non-conventional series such as, for instance, sentiment indicators or signals coming from other proprietary models. This approach can be used for multi-stock trading strategies as well as for tactical asset allocation models.
Sanjiv Das, Professor of Finance and Data Science, Santa Clara University
In this talk we define and characterize the business of FinTech by identifying 10 salient areas of influence. We then analyse one area, namely AI, and examine how it is changing the landscape of finance through FinTech applications.
♦ What is FinTech?
♦ Example of AI in FinTech.
♦ Predicting markets with AI.
♦ The transformation of data use with AI.
♦ The future of labor markets in the finance industry.
Christina Erlwein-Sayer, Senior Quantitative Analyst and Researcher, OptiRisk Systems
Sovereign bond spreads are modelled taking into account macroeconomic news sentiment. We investigate the sovereign bond spreads of European countries and enhance the prediction of spread changes by including news sentiment. We conduct a correlation and rolling-correlation analysis between sovereign bond spreads and accumulated sentiment series, and analyse changing correlation patterns over time. These findings are utilised to monitor sovereign bonds, predict spread changes in an ARIMAX model and highlight changing risks. The results are integrated in the SENRISK tool, a decision support system for bond risk assessment.
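A minimal version of such a model is an autoregression with an exogenous sentiment term, fitted by ordinary least squares; the full ARIMAX specification in the talk adds differencing and moving-average error terms. Variable names here are illustrative:

```python
import numpy as np

def fit_arx(spread, sentiment):
    """OLS fit of spread_t = a + b*spread_{t-1} + c*sentiment_{t-1} + e_t."""
    Z = np.column_stack([np.ones(len(spread) - 1), spread[:-1], sentiment[:-1]])
    beta, *_ = np.linalg.lstsq(Z, spread[1:], rcond=None)
    return beta  # [a, b, c]

def forecast_next(beta, spread, sentiment):
    """One-step-ahead spread forecast from the fitted coefficients."""
    return beta @ np.array([1.0, spread[-1], sentiment[-1]])
```

A significant coefficient on the lagged sentiment term is the simplest evidence that accumulated news sentiment adds predictive power over the spread's own history.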
Ivailo Dimov, Quant Researcher, Bloomberg
Stories on the Bloomberg newsfeed are tagged with "topic codes" containing information about their origin, subject matter, or other characteristics. One might expect that sentiment analysis of news stories may be enhanced by taking into account these topic codes, but the sheer number of topic codes is an obstacle to doing so systematically.
In this talk, we present evidence that some groups of topic codes are indeed associated with stronger sentiment impact on stock prices than others, and discuss a method to condense the mass of topic codes by identifying and retrieving latent factors which may be interpreted as broad themes shared by groups of topic codes.
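One simple way to realise this condensation, sketched here with an invented mini-vocabulary of topic codes rather than Bloomberg's actual tags, is a truncated SVD of the story-by-code tag matrix; codes that load on the same singular vector form one latent theme:

```python
import numpy as np

def topic_code_factors(stories, codes, k=2):
    """stories: list of sets of topic codes attached to each story.
    Returns a (n_codes x k) matrix of code loadings on k latent factors."""
    idx = {c: i for i, c in enumerate(codes)}
    M = np.zeros((len(stories), len(codes)))   # binary story-by-code matrix
    for s, tags in enumerate(stories):
        for c in tags:
            M[s, idx[c]] = 1.0
    U, sv, Vt = np.linalg.svd(M, full_matrices=False)
    return Vt[:k].T
```

Codes that always co-occur end up with near-identical loadings, so a handful of factors can stand in for thousands of raw topic codes when conditioning sentiment impact.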
Anders Bally, CEO and Founder, Sentifi
This presentation is about how new AI methodologies like Deep Learning, the maturing Big Data Technologies and the fast emerging Information Sharing Culture can help investors to more efficiently discover, monitor and potentially predict Asset Valuation Drivers.
Andreas Zagos, Intracom GmbH
Intangible assets make up as much as 84% of company value in tech companies. The question is how to measure these intangible assets, namely patents and utility models. Intracom will present their indicator-based approach, using pattern recognition on big data to determine monetary values of patent portfolios – the "IP value factor". The monetary value was used for backtests on different indexes, and the results of those tests will be presented. The "IP value factor" is uncorrelated and generates alpha in sector-neutral backtests.
Jordan Mizrahi, CEO and Founder, FIRST TO INVEST
Combining a news sentiment approach with interactive visual displays helps end-users rapidly sort through volumes of company news events to identify key insights faster, in a form that is easy to use at any level of the organization. The visual analytics provide a broader view of a company's sentiment scoring: not just a singular event, but a time-line perspective, comparison to competitors, sector and industry scoring and even differentiation across markets.
Moderator: Gautam Mitra; Panellists:
Stan Maer, COO, Cryptics.tech
Ronald Hochreiter, Docent & CEO, WU Vienna University of Economics and Business & Academy of Data Science in Finance
AI and Machine Learning methods can be used to generate investment decisions successfully. A clever combination of Data Science methods with methods from the field of Decision Science (Prescriptive Analytics) may lead to even more successful models. In this talk a general outline for such a successful methodological combination will be presented as well as a concrete novel Deep Learning investment model which is based on graphical TTR series representations instead of using time-series directly. It will be shown how important Feature Engineering for Deep Learning in Finance actually is.
Christopher Kantos, Senior Equity Risk Analyst, Northfield
In December 2017, Northfield introduced the first commercially available factor risk models that incorporate computerized analysis of news text directly into volatility risk forecasts for individual stocks, corporate bonds, industry groups and ETFs based on market indices. Market events in early 2018 provided several excellent examples of why we believe that Risk Systems That Read® is the most significant innovation in factor risk models in more than three decades. We will show how recent news events drove financial market outcomes for Wynn Resorts, Wynn Macau, Facebook and Wanda Hotels (HK). Each day, the content of thousands of news articles is now part of the input for the full range of models available from Northfield. The line of research that led to this innovation stretches back to 1997, and includes five published papers by Northfield staff [diBartolomeo and Warrick (2005), diBartolomeo, Mitra, Mitra (2009), diBartolomeo (2011, 2013, 2016)]. Beyond the obvious improvement in risk estimation, the method has important implications for alpha generation by both quantitative and traditional active managers.
Dan Joldzic, CFA, FRM, CEO of Alexandria Technology
Local source, native publishers may offer an information advantage compared to publications in English. Translation services have typically been sub-optimal for character-based languages, but machine learning allows for classification in the native form, which can lead to significant alpha in forward periods.
Claus Huber, Founder and Managing Director, Rodex Risk Advisers
This article describes the application of Kohonen's Self-Organising Maps (SOM), a machine learning method, to the problem of selecting hedge funds to achieve stable portfolio performance. SOM can help to identify similarities in the return structures of hedge fund managers and hence avoid concentrations in a portfolio. The core question is whether SOM can add any value for manager selection. Two novel yet simple methods to select hedge funds, based on the specific properties of SOM, are proposed; both aim to identify unique investment strategies. To evaluate their performance relative to other, simpler benchmark methods of portfolio selection, a simulation study is conducted; it finds that both proposed SOM-based methods enhance risk/return profiles and drawdown patterns.
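To make the SOM mechanics concrete, here is a minimal 1-D map trained on toy two-dimensional "return profiles"; the cluster data, grid size and learning schedule are all illustrative and unrelated to the paper's actual hedge fund dataset:

```python
import math
import random

def train_som(data, n_units=4, epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    """Train a 1-D self-organising map on a list of equal-length vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.uniform(-0.1, 0.1) for _ in range(dim)] for _ in range(n_units)]
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)                # decaying learning rate
        sigma = sigma0 * (1.0 - e / epochs) + 0.01   # shrinking neighbourhood
        for x in data:
            # best-matching unit: closest weight vector in Euclidean distance
            bmu = min(range(n_units),
                      key=lambda u: sum((w - v) ** 2 for w, v in zip(units[u], x)))
            for u in range(n_units):
                h = math.exp(-((u - bmu) ** 2) / (2.0 * sigma ** 2))
                units[u] = [w + lr * h * (v - w) for w, v in zip(units[u], x)]
    return units

def best_unit(units, x):
    """Index of the unit whose weights are closest to x."""
    return min(range(len(units)),
               key=lambda u: sum((w - v) ** 2 for w, v in zip(units[u], x)))
```

Funds mapping to the same or neighbouring units have similar return structures, which is precisely the concentration a diversified portfolio would want to avoid.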