Hatch AI

Edinburgh
Seeking investment

Hatch-AI is a cutting-edge tech start-up that leverages Artificial Intelligence, Machine Learning and Natural Language Processing to solve pressing industry challenges.

Our vision is to enable greater inclusion in the service industry by materially boosting service productivity, quality and transparency.

Hatch-AI will do this by transforming the way we meet. Cloud-based, domain-specific speech recognition captures complete transcripts of meetings, removing the need for note-taking and allowing service providers to focus on what really matters: their customer. Recommender systems utilise ASR output to deliver live product, service and environmental prompts direct to a mobile device.

After the meeting, Hatch NLP modules summarise the output and populate downstream documentation, ready to review and, if necessary, amend. Ongoing service is transformed through a voice-enabled dialogue system (the service-bot): personalised, scalable client interaction designed to ensure the products and services provided remain relevant to customers. The service-bot is also an opportunity to provide support, education and information capture for lead generation.

Searchable, punctuated meeting transcripts are stored for the regulatory period, promoting trust and transparency. Transcripts also provide a new source of data for trend analysis across the customer base, offering new insight into the needs and concerns of customers.

Introducing Hatch Adviser

Hatch Adviser is a mobile-first, intelligent assistant designed to systematically address challenges faced by the UK financial advice industry. The FCA estimates that 12.8 million people in the UK need, but do not receive, financial advice. Challenges:

  • Adviser Productivity: it is estimated that up to 40% of adviser time is spent on non-value-add activities such as manual processes and administration;
  • Accessibility: there are approximately 27,000 registered financial advisers in the UK, serving a potential customer base of 16 million people;
  • Affordability: 46% of UK adults would be willing to pay for regulated financial advice if the costs were ‘reasonable’ (FCA);
  • Transparency, Trust & Quality: the low-trust environment that followed the financial crisis persists, with only 39% of people trusting advisers to act in the best interests of their clients;
  • Complexity: nomenclature, regulation, product complexity and the breadth of potential outcomes all act as barriers to consumers.

Automated Speech Recognition (ASR)

Our team are world leaders in computational linguistics and automated speech recognition.

Cloud-based ASR utilises deep learning, with conversational models trained on domain-specific data for the rich transcription of face-to-face meetings. The automatic speech-to-text process also segments the audio stream, identifies different talkers, and structures and punctuates the output.
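
To make the shape of that output concrete, the sketch below (Python, illustrative only) shows the kind of speaker-attributed, time-stamped, punctuated transcript such a pipeline could hand to the downstream NLP modules; the class names, field names and example values are assumptions for illustration, not a published Hatch-AI interface.

```python
# Illustrative sketch only: hypothetical data structures for the kind of
# rich transcript described above (speaker-attributed, segmented, punctuated).
# Class and field names are assumptions, not the Hatch-AI API.
from dataclasses import dataclass
from typing import List


@dataclass
class Segment:
    start: float   # segment start time, in seconds
    end: float     # segment end time, in seconds
    speaker: str   # diarisation label, e.g. "adviser" or "client"
    text: str      # punctuated, cased transcript text


@dataclass
class MeetingTranscript:
    meeting_id: str
    segments: List[Segment]

    def full_text(self) -> str:
        """Concatenate segments into a readable, searchable transcript."""
        return "\n".join(f"[{s.speaker}] {s.text}" for s in self.segments)


# Invented example of the output the ASR, diarisation and punctuation stages
# would produce for a short exchange.
transcript = MeetingTranscript(
    meeting_id="client-0042-review",
    segments=[
        Segment(0.0, 4.2, "adviser", "Good morning. Shall we review your pension contributions?"),
        Segment(4.2, 9.8, "client", "Yes, and I'd also like to talk about my ISA allowance."),
    ],
)
print(transcript.full_text())
```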

NLP Automation

Information extraction modules automatically extract structured knowledge. Named entity detection, topic detection, clustering, relation extraction and summarisation are used to efficiently process transcriptions, populate customer records and generate documentation (Productivity, Automation).
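
As a rough illustration of the named entity step, the sketch below runs an off-the-shelf spaCy model over an invented transcript snippet and groups the detected entities into a simple customer record; the snippet, the record layout and the choice of spaCy are assumptions for illustration, not the Hatch production modules.

```python
# Rough illustration only: named entity detection over an invented transcript
# snippet using an off-the-shelf spaCy model, with entities grouped into a
# simple customer record. Not the Hatch production extraction pipeline.
from collections import defaultdict

import spacy

nlp = spacy.load("en_core_web_sm")  # small English model; assumed to be installed

snippet = (
    "I currently pay 300 pounds a month into my Standard Life pension "
    "and I would like to retire in 2045."
)

record = defaultdict(list)
for ent in nlp(snippet).ents:
    # group detected entities (labels such as MONEY, ORG, DATE) by type
    record[ent.label_].append(ent.text)

print(dict(record))
```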

AI Recommender

Live prompts and product recommendations are selected based on the customer's financial position, goals, risk appetite, precedent, and the external environment. The solution combines both rule-based and machine learning components (Complexity, Quality).
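
A minimal sketch of that hybrid idea, with invented customers, products and rules: hard suitability rules filter the candidate set, and a learned scorer (represented here by a placeholder) ranks what remains.

```python
# Illustrative sketch of the hybrid recommender idea: rules gate the
# candidates, a learned model ranks them. Products, features and the scoring
# stand-in are invented for illustration.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Customer:
    age: int
    risk_appetite: str   # "low" | "medium" | "high"
    goal: str            # e.g. "retirement", "house_deposit"


@dataclass
class Product:
    name: str
    min_risk: str
    suitable_goals: List[str]


RISK_ORDER = {"low": 0, "medium": 1, "high": 2}


def passes_rules(customer: Customer, product: Product) -> bool:
    """Rule-based component: never recommend outside risk appetite or goal."""
    return (RISK_ORDER[customer.risk_appetite] >= RISK_ORDER[product.min_risk]
            and customer.goal in product.suitable_goals)


def recommend(customer: Customer, products: List[Product],
              score: Callable[[Customer, Product], float]) -> List[Product]:
    """Hybrid recommender: rules filter the candidates, a learned model ranks them."""
    eligible = [p for p in products if passes_rules(customer, p)]
    return sorted(eligible, key=lambda p: score(customer, p), reverse=True)


# A trained model would supply `score`; a constant stands in for it here.
catalogue = [
    Product("Cash ISA", "low", ["house_deposit", "retirement"]),
    Product("Global Equity Fund", "high", ["retirement"]),
]
picks = recommend(Customer(34, "medium", "house_deposit"), catalogue,
                  score=lambda c, p: 0.5)
print([p.name for p in picks])
```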

Service-bot

The dialogue system architecture features, at its core, a rule-based system aided by a machine learning component. Our hybrid, data-driven methodology addresses both the coverage limitations of a strictly rule-based approach and the lack of guarantees of a strictly machine-learning-based approach.
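
A minimal sketch of that hybrid routing, with invented intents, patterns and responses: deterministic rules handle the high-stakes intents, and a statistical classifier (a placeholder here) covers everything the rules miss. None of this is the Hatch service-bot itself.

```python
# Minimal sketch of hybrid intent routing: pattern rules first, learned
# classifier as the fallback. Intents and patterns are invented examples.
import re
from typing import Optional

RULES = [
    (re.compile(r"\b(cancel|stop)\b.*\bpolicy\b", re.I), "cancel_policy"),
    (re.compile(r"\bspeak to (an?|my) adviser\b", re.I), "handover_to_human"),
]


def rule_intent(utterance: str) -> Optional[str]:
    """Rule-based component: deterministic matches for high-stakes intents."""
    for pattern, intent in RULES:
        if pattern.search(utterance):
            return intent
    return None


def ml_intent(utterance: str) -> str:
    """Placeholder for a learned intent classifier trained on labelled dialogues."""
    return "general_enquiry"


def classify(utterance: str) -> str:
    # Rules take precedence; the learned model broadens coverage.
    return rule_intent(utterance) or ml_intent(utterance)


print(classify("I want to cancel my policy"))   # -> cancel_policy
print(classify("How is my ISA performing?"))    # -> general_enquiry
```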

We're a team of world-leading computer scientists and engineers in artificial intelligence, machine learning and natural language processing. We're industry experts, professors and senior researchers at the University of Edinburgh. We deliver world-class research in automated speech recognition, neural machine translation, natural language processing and machine learning.

Our Origins

With €10m investment and support from a formidable array of partners, we led an EU H2020 project to build a state-of-the-art, integrated Natural Language Processing platform called SUMMA. Created for our principal partner, the BBC, SUMMA seamlessly aligns AI and NLP modules within a versatile micro-service architecture. Designed to ingest up to 400 live news streams simultaneously, the platform is big-data enabled, creating a powerful tool for the aggregation, structuring and analysis of data.

Automated Speech Recognition and Machine Translation modules enable the capture and transcription of video and audio across 9 language pairs, alongside social media and web-scraping tools. Data structuring is driven through a series of NLP modules, including named entity linking, clustering, topic detection and knowledge base construction. Structured data is then analysed for patterns and nuance through sentiment analysis, before outputs are prepared through automated summarisation technology.
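
The sketch below (illustrative only) expresses that stage ordering as a simple chain of placeholder functions; in the real platform each stage runs as its own micro-service, and none of the names here are SUMMA's actual APIs.

```python
# Illustrative stage ordering only: ingest -> ASR/MT -> structuring ->
# sentiment -> summarisation. Function names are placeholders, not SUMMA APIs.
from typing import Callable, Dict, List

Document = Dict[str, object]

def ingest(stream_url: str) -> Document:
    return {"source": stream_url, "audio": b"...", "text": None}

def transcribe_and_translate(doc: Document) -> Document:
    doc["text"] = "transcribed and translated text"      # ASR + MT
    return doc

def structure(doc: Document) -> Document:
    doc["entities"], doc["topics"] = ["..."], ["..."]    # NEL, clustering, topics
    return doc

def analyse_sentiment(doc: Document) -> Document:
    doc["sentiment"] = 0.0
    return doc

def summarise(doc: Document) -> Document:
    doc["summary"] = "automatic summary"
    return doc

PIPELINE: List[Callable[[Document], Document]] = [
    transcribe_and_translate, structure, analyse_sentiment, summarise,
]

doc = ingest("https://example.org/live-news-stream")
for stage in PIPELINE:
    doc = stage(doc)
print(doc["summary"])
```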

Our Evolution

We're spreading our wings. We've lined up the brilliant minds behind the SUMMA platform, and we're adding leading data scientists, engineers and experts from the investments and advice industries. This exceptional team is now Hatch-AI.

Partners

We have partnered with Big 4 consultancy firm Ernst & Young to take Hatch Adviser to market. EY are supporting Hatch-AI through the initial stages of strategic partnership identification and product development. We have a number of partnership discussions ongoing with FTSE 100 and Fortune 500 companies.

Joseph Twigg – CEO

Joseph has 15 years' experience in the investment industry, most recently serving as Global Head of Strategy and Business Management for the UK's largest asset manager, Standard Life Aberdeen. Joseph was responsible for the global business development strategy, leading major international projects in North America, the UK and Europe. Joseph trained in accountancy and holds professional qualifications in strategy from Stanford University and an Executive MBA from the University of Edinburgh.

Dr Lexi Birch – CTO/NLP

Dr Lexi Birch is currently a Research Fellow in the School of Informatics at the University of Edinburgh. Lexi is an expert in neural machine translation (NMT), where she has been leading research addressing problems with NMT's limited vocabulary size, developing techniques to train on monolingual data, and examining the value of deeper models. This research has pushed the state of the art in machine translation. She has been awarded an Innovation Fellowship from the EPSRC (2018-2021).

Prof. Steve Renals – Chief Scientific Adviser

Steve Renals is Head of Computational Linguistics and Professor of Speech Technology at the University of Edinburgh. He has made significant contributions to speech technology, with over 250 publications in the area (h-index 51), and he and his students have won several best paper awards for their work in speech recognition in the past few years. He has led several large projects in the field, including the EPSRC Programme Grant Natural Speech Technology and the large EU projects SUMMA, AMI and AMIDA.

Dr Shay Cohen – Machine Learning

Dr Shay Cohen is a lecturer at the School of Informatics at the University of Edinburgh, where he was awarded a Chancellor's Fellowship (2013-2018). Dr Cohen's research focuses on natural language processing and machine learning. A major aim of his work is to develop the basic building blocks necessary for natural language applications: algorithms and approaches that are fundamental to the analysis of natural language. Dr Cohen's work has been covered in media outlets such as the BBC (https://tinyurl.com/ybsy84bo) and the New Scientist.

Dr Federico Fancellu – Chatbot & Dialogue

Federico is currently a post-doctoral research associate at the University of Edinburgh, investigating multilingual natural language understanding. He recently completed a Ph.D. in Informatics at the University of Edinburgh with a thesis on computational models for multilingual negation detection. During his Ph.D., he was involved in a year-long project funded by Amazon, where he led a team of 8 Ph.D. students to build an open-domain conversational agent from scratch to improve the quality of the current Alexa system.

Dr Yolanda Vazquez – Human Computer Interaction

Yolanda holds an MSc in Speech and Hearing Sciences with Distinction from University College London (UCL) and a PhD in Human-Computer Interaction from the University of Glasgow, specialising in auditory interfaces. She has over 15 years of commercial and academic experience working with audio, voice and interactive systems, including unique experience in speech technology, experimental phonetics, social psychology and human factors. Yolanda was a Research Fellow at the School of Computing Science at the University of Glasgow, funded by the Scottish Informatics & Computer Science Alliance (SICSA).

Dr Barry Haddow – Information Extraction

Barry Haddow is a senior researcher in the School of Informatics at the University of Edinburgh. Barry has worked on both information extraction and machine translation, and he now focuses on neural machine translation (NMT). He recently coordinated HimL (Health in my Language), a €3M EU project which showed how state-of-the-art NMT could be applied to the healthcare domain. His current research projects aim at web-scale extraction of training data for NMT, and cross-lingual information retrieval and summarisation for under-resourced languages. He has authored more than 60 publications in NLP, and is the lead organiser of the annual WMT Conference on Machine Translation.

Overview

  • Funding Stage: Self-funded
  • Trading: <1 year
  • Employees: 6-10
  • Sector: Personal Finance
  • Valuation: N/A

Get in touch