True ML talks #18 - Generative AI discussion with Tushar Kant

We are back with another episode of True ML Talks. In this episode, we dive deep into Generative AI, and we are speaking with Tushar Kant.

Tushar is a seasoned MLOps leader with 20+ years of experience at top tech companies and a wide range of skills in product, business, engineering, and investment banking. He is also the co-founder of the worldwide IIT Artificial Intelligence and Machine Learning Forum, and runs a very active Slack community for it as well.

Watch the full episode below:

IIT Artificial Intelligence and Machine Learning Forum

Forum's Vision

The IIT AI/ML Forum was started with the vision of creating a community where IITians working in AI/ML could share their knowledge, collaborate, and help each other. Its founders believed that by working together, IITians could surpass any other engineering institute in the world.

The forum has been a huge success, with over 1800 members from all over the world. The forum has organized events, supported other organizations, and grown into a thriving community in its own right.

Forum's Accomplishment

Tushar is particularly proud of three things that the forum has accomplished:

  1. Organizing the one-day track for AI at Icon: This is a major conference in Silicon Valley, and the fact that the forum was able to organize the AI track is a testament to its reputation and influence.
  2. Building strong connections among members: The forum has helped to create lifelong friendships and business partnerships among its members.
  3. Providing a support system during the COVID-19 pandemic: When the world shut down, the forum continued to meet every other week, providing a source of knowledge and support for its members.

Pivotal Moments in the Growth of AI and MLOps

The combination of cloud computing, Transformers, and pre-training will be a major driver of innovation in AI and MLOps in the coming years, particularly through multimodal AI, which combines natural language processing and computer vision to solve complex problems.

Cloud computing has made AI more accessible and affordable for everyone. This has led to a surge of innovation in the field, as startups and individuals are now able to develop and deploy AI applications without having to invest in expensive infrastructure.

Transformers have revolutionized natural language processing and computer vision. Transformers are a type of neural network architecture that is able to learn long-range dependencies in data. This makes them well-suited for tasks such as machine translation and image recognition.
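The core mechanism that lets Transformers learn those long-range dependencies is attention. A minimal sketch of scaled dot-product self-attention is below, in plain NumPy with a single head and no masking; real Transformer layers add multi-head projections, residual connections, and normalization:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # token-to-token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ V                                # each output mixes the whole sequence

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
X = rng.normal(size=(seq_len, d_model))               # toy "embedded sentence"
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```

Because every output row is a weighted mix of the entire sequence, distant tokens can influence each other in one step, which is what makes the architecture effective for translation and vision tasks alike.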

Pre-training is a technique where a large language model is trained on a massive dataset of text and code. This pre-trained model can then be fine-tuned for specific tasks, such as translation or question answering. Pre-training has significantly improved the performance of AI models on a wide range of tasks.

ChatGPT and Generative AI: Potential Applications Across Industries

ChatGPT and generative AI have the potential to revolutionize many industries. Tushar is particularly interested in the potential of these technologies to improve customer service, reduce fraud, personalize products and services, and improve healthcare.

Examples of specific applications of ChatGPT and generative AI in different industries:

  1. Customer service and experience: ChatGPT and generative AI can be used to automate customer service tasks, such as responding to queries and generating reports. This can free up customer service representatives to focus on more complex tasks.
  2. Risk assessment and fraud detection: ChatGPT and generative AI can be used to identify and mitigate risks in the banking and finance industry. For example, they can be used to detect fraudulent transactions and assess the risk of borrowers.
  3. Personalization: ChatGPT and generative AI can be used to personalize products and services for customers in the retail industry. For example, they can be used to recommend products to customers based on their past purchases and browsing history.
  4. Premium determination and risk assessment: ChatGPT and generative AI can be used to determine insurance premiums and assess the risk of policyholders in the insurance industry.
  5. Patient advocacy and disease diagnosis: ChatGPT and generative AI can be used to develop patient advocacy tools and diagnose diseases more quickly and accurately in the healthcare industry.


LLMs and Risk Assessment

LLMs are still in their early stages of development, but they have the potential to revolutionize risk assessment in the financial services industry:

LLMs can process more data, faster. Risk assessment models traditionally rely on a limited amount of data, such as credit scores and income. LLMs can process much more data, such as spending patterns, buying behavior, and online behavior. This allows them to create more accurate risk assessments.

LLMs can consider associative factors. In addition to individual factors, such as credit score, LLMs can also consider associative factors, such as the company a person works for and the industry they work in. This can help them to create more comprehensive risk assessments.
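As a purely illustrative sketch of this idea, the snippet below folds both traditional and associative factors into a single prompt an LLM could score. The feature names, values, and prompt wording are invented for illustration and do not represent a real underwriting system:

```python
# Hypothetical applicant features -- names and values are assumptions for illustration.
traditional = {"credit_score": 712, "annual_income_usd": 85000}
associative = {
    "employer_industry": "semiconductors",        # sector health as a proxy signal
    "employer_headcount_trend": "growing",
    "avg_monthly_discretionary_spend_usd": 1400,  # spending-pattern signal
}

def build_risk_prompt(traditional, associative):
    """Fold traditional and associative factors into one prompt for an LLM to assess."""
    lines = ["Assess the credit risk of this applicant."]
    lines += [f"- {k}: {v}" for k, v in {**traditional, **associative}.items()]
    lines.append("Respond with LOW, MEDIUM, or HIGH and a one-line rationale.")
    return "\n".join(lines)

prompt = build_risk_prompt(traditional, associative)
print(prompt)
```

The point of the sketch is the contrast: a traditional scorecard sees only the first two fields, while an LLM-based assessment can ingest the associative signals in the same free-text form.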

Future of LLMs

Types of players in the ecosystem

Tushar believes that there will be three types of players in the ecosystem:

  1. Foundation model builders: Companies like OpenAI, Google, and Meta that develop the large language models themselves.
  2. LLM Ops platforms: Companies like AWS and Google that provide platforms for developers to build and deploy LLM applications.
  3. LLM distributors: Companies that develop and sell LLM-powered products and services to end users.
📌
Electrical Power Industry:
In the electrical power industry, there are generators, transmission lines, and distributors. In the LLM industry, Tushar sees foundation model builders as generators, cloud computing providers as transmission lines, and startup companies as distributors.

Closed vs. open source:

There will be a space for both closed and open source LLMs.
Closed source models will be preferred by large enterprises that need production-ready solutions with support. Open source models will be preferred by smaller companies and researchers who need more flexibility and customization.

Middleware's Role:

There will be a need for middleware to help developers use LLMs more easily and efficiently. Middleware can provide features such as model management, fine-tuning, and monitoring.
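A minimal sketch of what such middleware might look like, assuming a simple text-in/text-out model interface; the class and its monitoring fields are illustrative, not a real product API:

```python
import time

class LLMMiddleware:
    """Illustrative middleware wrapping any text-in/text-out model:
    it adds monitoring (latency, call counts) without touching the model itself."""

    def __init__(self, model_fn):
        self.model_fn = model_fn
        self.calls = 0
        self.total_latency = 0.0

    def __call__(self, prompt):
        start = time.perf_counter()
        reply = self.model_fn(prompt)          # delegate to the wrapped model
        self.total_latency += time.perf_counter() - start
        self.calls += 1
        return reply

# Stand-in "model": a real deployment would call an actual LLM here.
echo_model = LLMMiddleware(lambda p: p.upper())
print(echo_model("hello"))  # HELLO
print(echo_model.calls)     # 1
```

The same wrapping pattern extends naturally to the other middleware concerns Tushar mentions, such as routing requests to different fine-tuned model versions or logging prompts for audit.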

Benefits and risks of LLMs:

It is imperative to view LLMs as tools that can either amplify human capabilities or pose risks, depending on their application. Like any tool, the use of LLMs is shaped by human choices and intentions. They hold the potential to advance medical treatments, foster innovative educational programs, and automate tasks currently performed by humans. However, they can also generate deepfakes, spread misinformation, and manipulate individuals.

Human role in the development and use of LLMs:

Even as LLMs grow in sophistication, they will always fall short of fully grasping the nuances of human values. Consequently, humans retain a pivotal role in ensuring that LLMs align with our values. This includes establishing ethical guidelines for LLM development and usage, educating the public about LLM benefits and risks, and recognizing that humans possess the unique capacity to think creatively and find innovative solutions, while LLMs are constrained by their training data.

Building Generic RAG Systems: AWS vs. Startups

When it comes to constructing generic RAG systems, AWS and startups each bring their own distinct advantages and challenges to the table.

AWS Strengths: AWS is well-placed to develop generic RAG systems due to its substantial customer base and a wide range of services that can support RAG. For example, AWS offers SageMaker, a machine learning platform for training and deploying RAG models. Additionally, AWS provides various data storage and processing services ideal for RAG workflows.

AWS Weaknesses: AWS might not match the agility of startups in terms of swiftly developing and launching new products. Furthermore, AWS's focus may not be as specific as startups, especially in use cases like RAG for healthcare.

Startup Advantages: Startups excel in agility, allowing them to focus on specific use cases and rapidly innovate in the RAG domain. Their niche focus can lead to unique RAG solutions and innovations often overlooked by larger entities.

Startup Challenges: Startups often grapple with resource constraints, lacking the extensive customer base and service portfolio of AWS. Competing with AWS on price can be daunting due to the scale and resources of the tech giant.
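The retrieval step at the heart of any generic RAG system can be sketched with plain term-frequency matching. A production system, whether on AWS or from a startup, would use learned embeddings and a vector database, so treat this as a toy illustration of the retrieve-then-prompt flow:

```python
import math
from collections import Counter

# Toy corpus; a real RAG system would index documents as learned embeddings.
docs = [
    "SageMaker trains and deploys machine learning models on AWS",
    "RAG retrieves relevant documents before the LLM generates an answer",
    "Startups can focus RAG on niche use cases like healthcare",
]

def tf_vector(text):
    """Bag-of-words term frequencies as a stand-in for an embedding."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Rank documents by similarity to the query and return the top k."""
    q = tf_vector(query)
    ranked = sorted(docs, key=lambda d: cosine(q, tf_vector(d)), reverse=True)
    return ranked[:k]

context = retrieve("which AWS service deploys machine learning models", docs)
answer_prompt = f"Context: {context[0]}\n\nQuestion: which AWS service deploys ML models?"
print(context[0])
```

The final prompt, grounded in the retrieved context, is what gets sent to the LLM; swapping the toy matcher for a managed vector store is exactly the kind of plumbing where AWS's service breadth and a startup's niche focus compete.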

Advice for startups that are developing RAG systems:

  1. Focus on specific use cases: Startups should focus on developing RAG solutions for specific use cases. This will help them to differentiate themselves from AWS and other large companies.
  2. Move quickly: Startups need to move quickly to develop and launch their RAG solutions. This is because AWS and other large companies can easily copy their products.
  3. Be an attractive M&A candidate: Startups should focus on developing RAG solutions that make them attractive acquisition targets. This will give them a way to exit their business if they are unable to compete with AWS and other large companies.

Advice for Leaders

  1. Be nimble and agile. The field of generative AI is constantly evolving, so it is important to have a mindset and a team that is able to quickly adapt to new developments.
  2. Focus on solving real problems. Don't get caught up in the hype of generative AI. Instead, focus on identifying real business challenges that can be solved with this technology.
  3. Don't be afraid to be late. It's okay if someone else beats you to market with a new generative AI solution. The important thing is to learn from their mistakes and build a better product.
  4. Don't force it. Not every problem needs a generative AI solution. Use your business acumen to identify the right problems to solve with this technology.

Advice for data science and engineering leaders

  1. Don't start with the hammer. Don't just look for ways to use generative AI. Instead, start by identifying your business challenges and then see if generative AI is the right tool to solve them.
  2. Work backwards from the customer. What are the customer's needs? What are their pain points? Once you understand the customer, you can start to think about how generative AI can be used to help them.
  3. Don't give in to top-down mandates. If your leadership team is mandating that every team come up with generative AI use cases, don't just go through the motions. Push back and ask why they think generative AI is the right solution for those problems.

Read our previous blogs in the True ML Talks series:

True ML Talks #17 - ML Platforms @ Slack, LLMs and SlackGPT
In this blog, we dive deep into Slack’s Recommend API, understand their ML architecture and the LLM use cases in Slack, and go into SlackGPT.

Keep watching the TrueML YouTube series and reading the TrueML blog series.


TrueFoundry is an ML Deployment PaaS over Kubernetes that speeds up developer workflows while allowing them full flexibility in testing and deploying models, and ensuring full security and control for the Infra team. Through our platform, we enable machine learning teams to deploy and monitor models in 15 minutes with 100% reliability, scalability, and the ability to roll back in seconds - allowing them to save cost and release models to production faster, enabling real business value realisation.