New Features and Performance Improvements for
The World’s First AI Investing Assistant
Technology and Platforms:
Python 3, Django, Celery, Pandas, GraphQL, FastAPI, SQLAlchemy, AWS (DynamoDB, EKS, S3, Elasticsearch, CloudWatch), Sentry, DataDog, GitLab-CI, Kubernetes, Docker, PostgreSQL, Redis, Kafka
Magnifi is a subscription-based app that uses AI to help users make investment decisions. It provides users with a conversational AI investing assistant, investment search engine, on-demand data, interactive planning tools, and a commission-free brokerage. Magnifi guides users to understand their investing personality, makes a plan for their goals, and finds investments that help them meet those goals.
The first requirement we pursued was a Comma Separated Values (CSV) file upload for users’ portfolio data. Users could provide information on the stocks and investments they held (e.g., company name, ticker symbol, number of shares, purchase price, etc.). The CSV upload feature was our primary focus before the discovery phase began.
During the discovery process, we distilled this initial focus into Magnifi’s underlying needs. These needs centered around augmenting AI algorithms with more data, enhancing the quality of insights we could deliver to customers. With richer and more useful insights, customers could make better investing decisions.
Magnifi uses conversational AI to help its users make investment decisions. To develop and enhance the machine learning models underlying that AI, we needed a larger set of training data to work with. After a series of discussions during discovery, we collectively determined that the solution was a new service integrating with the Plaid Investment API.
We collaborated on four product-related challenges.
Portfolio Integration Service
We delivered a service to ingest end users’ portfolio data, which Magnifi’s AI engine would use to provide more effective investing insights. Previously, Magnifi drew on a limited data set, which capped the accuracy and detail of the insights it could deliver.
We identified that the legacy Notification service would not support users’ needs and had to be rewritten. We created a new Notification service that removes the dependency on a legacy solution built around overnight batch processes; the new service scales better by processing notifications on demand.
We needed to stabilize the Magnifi platform in order to grow and scale the business, which meant a stable architecture and improved performance. By addressing both, Magnifi can now build and grow the products and services it offers to customers.
Conversational AI Chatbot API Optimization
Chatbot users experienced slow response times and significant error rates under even small loads. We needed to observe the system under various controlled load scenarios to identify the root cause of performance issues.
“What I love about the Forte team is that they do more than deliver on requirements. They look at the big picture and consider our broad-based business objectives. They then propose solutions with improvements or optimizations over what the original requirements stated. As a result, they deliver the best possible outcomes.”
Anis Ben Brahim
VP of Product at Magnifi
Forte’s Senior Product Owner led a three-week discovery in which we worked together as one squad to fully understand the desired business outcomes, success metrics, the current and target technical architecture, and other key items. The team invested this time upfront in learning before building.
The team evaluated the different ways to achieve the desired result and decided to deliver the Plaid API first. The result? Working software that enabled a dramatic increase in user adoption and usage. This release led to positive market exposure, including coverage by Yahoo! Finance, Enterprise Talk and AIThority.
All of this in roughly one-third the time it would have taken to build a custom solution.
Magnifi uses AI to help users manage their personal investment portfolios. Users are able to learn more about their investments and make informed decisions about future ones. The problem was that the limited data we had on each person’s portfolio made the AI-generated insights less meaningful and useful.
We needed to tie users more closely with their portfolio details. Users would be able to import their portfolio, which would be fed into Magnifi’s machine learning models. This allows for the product to extend more of its functionality to the user as the portfolio data enables Magnifi’s conversational chat to provide recommendations or notify users on portfolio changes.
These could be direct communications or portfolio-related advice, such as identifying the most profitable investments of the last quarter or suggesting what to invest in over the next six months. There are also recommendations that users can subscribe to or that appear automatically.
This integration allows users to import their portfolios from other platforms into Magnifi. To meet all requirements, we designed and implemented a dedicated microservice named User Aggregation Service. The service integrated with the Plaid API so the platform could receive portfolio details for each user, and it stored those details in NoSQL data storage. The portfolio details were fed into the Magnifi platform while staying in sync with users’ accounts.
We implemented two approaches to portfolio gathering. The first is a manual triggering for data synchronization and the second uses webhooks to communicate between Magnifi and the Plaid API.
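The two ingestion paths can be sketched as a single normalization step: both the manual refresh and the Plaid webhook produce the same sync job. This is a minimal illustration, not Magnifi’s actual code; the `SyncJob` type is an assumption, and the payload fields follow the shape of Plaid’s documented Holdings webhook.

```python
# Sketch: normalize the two portfolio-sync triggers (manual refresh
# vs. Plaid webhook) into one job type for downstream processing.
# The SyncJob shape and field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SyncJob:
    item_id: str   # the Plaid item whose holdings should be re-fetched
    source: str    # "manual" or "webhook"

def job_from_manual_trigger(item_id: str) -> SyncJob:
    """A user pressed 'refresh portfolio' in the app."""
    return SyncJob(item_id=item_id, source="manual")

def job_from_webhook(payload: dict) -> Optional[SyncJob]:
    """Plaid notifies us that new holdings data is ready."""
    if payload.get("webhook_type") != "HOLDINGS":
        return None  # ignore unrelated webhook types
    return SyncJob(item_id=payload["item_id"], source="webhook")
```

Funneling both paths into one job type keeps the actual fetch-and-store logic in a single place, regardless of what triggered the sync.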
Notifications were the next target that was prioritized. Magnifi had a legacy notification service that used batch processes to send notifications to all users. This service began to fail and have performance issues as the number of users grew. The notification service needed to be modernized.
We needed to send notifications internally and externally on time, but sending them can be compute-intensive. The solution had to be scalable, cost-effective, and able to handle a large number of users.
We created a dedicated microservice to handle notification triggers from the platform, deployed as a containerized service under Kubernetes orchestration.
The microservice scales horizontally and vertically as required. All services run in the same cluster to avoid spending extra compute on data transfers. We used Apache Kafka to manage the notifications queue and to process each request asynchronously.
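The queue-and-worker pattern behind this design can be sketched in a few lines, with an in-memory queue standing in for the Kafka topic (the event shape and worker-pool size are assumptions for illustration):

```python
# Sketch of the notification pipeline: producers enqueue notification
# events; a pool of worker threads drains the queue asynchronously.
# Python's queue.Queue stands in here for a Kafka topic.
import queue
import threading

notifications: "queue.Queue" = queue.Queue()
delivered = []
delivered_lock = threading.Lock()

def worker() -> None:
    while True:
        event = notifications.get()
        if event is None:  # sentinel: shut this worker down
            notifications.task_done()
            break
        with delivered_lock:
            delivered.append(event)  # real code would hand off to a send channel
        notifications.task_done()

def publish(user_id: str, message: str) -> None:
    notifications.put({"user_id": user_id, "message": message})

# A small pool of workers, mimicking consumer-group scaling.
threads = [threading.Thread(target=worker) for _ in range(3)]
for t in threads:
    t.start()
for i in range(10):
    publish(f"user-{i}", "Your portfolio changed")
for _ in threads:
    notifications.put(None)  # one sentinel per worker
for t in threads:
    t.join()
```

Because producers never wait on consumers, notification triggers stay cheap even when delivery itself is compute-intensive; scaling out means adding consumers, not changing producers.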
We needed to improve our personalization engine, including how the engine targeted users. The prior process didn’t allow for recommendations personalized to a user’s preferences or subscriptions. The new architecture would enable per-user modifications to recommendations, which can be quite extensive.
The recommendations provided through this new service needed significant modification, and converting from batch to individual delivery was a complex process. Recommendations had to be generated quickly, because timely suggestions are more useful to customers than delayed ones.
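At its core, converting batch recommendations into individual ones means filtering a shared candidate set against each user’s preferences and subscriptions. A minimal sketch (the field names are illustrative assumptions, not Magnifi’s schema):

```python
# Sketch: narrow a shared candidate list down to what one user should
# actually see, based on their subscriptions. "Auto" recommendations
# appear for everyone; subscription-gated ones require an opt-in.
def personalize(candidates: list, prefs: dict) -> list:
    topics = set(prefs.get("subscribed_topics", []))
    selected = []
    for rec in candidates:
        if rec.get("auto"):               # always-on recommendations
            selected.append(rec)
        elif rec.get("topic") in topics:  # subscription-gated recommendations
            selected.append(rec)
    return selected
```

Because the filter runs per user at request time rather than in an overnight batch, recommendations can reflect a user’s current subscriptions without regenerating everything for everyone.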
We analyzed the logic and mechanism for sending notifications and decided to move it into a separate microservice. We built the microservice from scratch, informed by how the previous version worked while accounting for all of its existing issues.
The new microservice serves as a tool for sending messages. We receive messages as input, specifying whom they’re sent to, when to send them, and how to send them. There are three possible ways of sending:
1. Internal (i.e., within the application)
3. Through the new microservice directly via queues and processes
This ensures the proper delivery of messages to the client.
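The routing described above, where each message specifies whom to send to, when, and how, can be sketched as a small channel dispatcher. The channel names and sender functions are assumptions for illustration:

```python
# Sketch of a channel dispatcher: each message carries recipient, body,
# and channel; the dispatcher hands it to the matching sender function.
from typing import Callable, Dict

def send_in_app(msg: dict) -> str:
    """Deliver within the application (e.g., an inbox entry)."""
    return f"in-app to {msg['to']}: {msg['body']}"

def send_via_queue(msg: dict) -> str:
    """Hand off to the microservice's queue for asynchronous delivery."""
    return f"queued for {msg['to']}: {msg['body']}"

SENDERS: Dict[str, Callable] = {
    "in_app": send_in_app,
    "queue": send_via_queue,
}

def dispatch(msg: dict) -> str:
    try:
        sender = SENDERS[msg["channel"]]
    except KeyError:
        raise ValueError(f"unknown channel: {msg['channel']!r}")
    return sender(msg)
```

Rejecting unknown channels loudly, instead of dropping messages silently, is one simple way to ensure proper delivery: a misrouted message fails fast where it can be noticed and fixed.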
We designed, developed and executed load test scenarios to allow the system to be observed in a controlled environment. We analyzed the resulting data to identify the root cause of the observed issues. Our analysis led to several improvements including replacing the API framework and optimizing the resources allocated to chat input parsing services.
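A controlled load scenario like the ones described boils down to firing concurrent requests at a target and summarizing latency percentiles. A stdlib-only sketch, where `call_endpoint` is a stand-in for a real HTTP call to the chat API:

```python
# Sketch of a tiny load test: call a target concurrently and report
# request count plus p50/p95 latency. call_endpoint simulates the
# system under test; a real scenario would issue HTTP requests.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def call_endpoint() -> None:
    time.sleep(0.001)  # simulate ~1 ms of service work

def run_load(concurrency: int, requests: int) -> dict:
    latencies = []
    def one_call() -> None:
        start = time.perf_counter()
        call_endpoint()
        latencies.append(time.perf_counter() - start)
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        for _ in range(requests):
            pool.submit(one_call)
    # The with-block waits for all submitted calls to finish.
    latencies.sort()
    return {
        "count": len(latencies),
        "p50": statistics.median(latencies),
        "p95": latencies[int(0.95 * (len(latencies) - 1))],
    }
```

Repeating such runs while varying `concurrency` is what makes the environment controlled: tail latency (p95) that grows much faster than median latency under load is a typical signature of the resource-contention issues described above.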
According to Vasundhara Chetluru, Co-founder and CPO, Magnifi, “The Forte team delivered features and enhancements essential to the growth of our business: the import of users’ portfolio data, a more scalable notifications system, more personalized recommendations and more. I highly recommend the Forte team.”