
<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Mauverick &#187; ABHIJIT BANGALORE</title>
	<atom:link href="https://mauverick.com/category/abhijit-bangalore/feed/" rel="self" type="application/rss+xml" />
	<link>https://mauverick.com</link>
	<description></description>
	<lastBuildDate>Fri, 03 Apr 2026 07:47:33 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=4.2.37</generator>
	<item>
		<title>Understanding the classifications of LLM (Large Language Models) deployment</title>
		<link>https://mauverick.com/understanding-the-classifications-of-llm-large-language-models-deployment/</link>
		<comments>https://mauverick.com/understanding-the-classifications-of-llm-large-language-models-deployment/#comments</comments>
		<pubDate>Thu, 17 Apr 2025 10:18:53 +0000</pubDate>
		<dc:creator><![CDATA[Abhijit Bangalore]]></dc:creator>
				<category><![CDATA[ABHIJIT BANGALORE]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Adoption]]></category>
		<category><![CDATA[AI Governance]]></category>
		<category><![CDATA[Organizational AI Governance]]></category>

		<guid isPermaLink="false">https://mauverick.com/?p=2869</guid>
		<description><![CDATA[Large Language Models (LLMs) have revolutionised the way we interact with technology, but not all LLMs are created equal. In this blog, let’s explore the two main types of LLMs: Closed loop &#38; Open loop models. We&#8217;ll dive into their fundamental differences, examine their pros and cons, and discuss how these differences impact their applications. [&#8230;]]]></description>
				<content:encoded><![CDATA[<p>Large Language Models (LLMs) have revolutionised the way we interact with technology, but not all LLMs are created equal. In this blog, let’s explore the two main types of LLMs: Closed loop &amp; Open loop models. We&#8217;ll dive into their fundamental differences, examine their pros and cons, and discuss how these differences impact their applications. By understanding the unique characteristics of each model, you&#8217;ll be better equipped to make informed decisions about which type of LLM is best suited for your needs. Let&#8217;s get started by looking at how these models are built and what sets them apart.</p>
<p><strong>Closed Loop LLM </strong></p>
<p>(Examples: GPT-4 (RLHF), Claude, Gemini, RAG systems)<br />
In a Closed Loop LLM, the model&#8217;s self-learning capability is developed and refined through human interaction and usage. This approach is akin to learning through mistakes and corrections by experience. The model adapts and improves based on the feedback it receives from users, which can lead to more accurate and contextually appropriate responses over time.</p>
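<p>The feedback loop described above can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's actual system: the <code>ClosedLoopResponder</code> class and its scoring scheme are hypothetical stand-ins for the human-in-the-loop refinement a real Closed Loop LLM performs.</p>

```python
# Illustrative toy of a closed feedback loop (not a real vendor API):
# user ratings are folded back into per-response scores, so the system
# adapts its answers to the feedback it receives over time.
class ClosedLoopResponder:
    def __init__(self, candidates):
        # candidate responses per intent, each starting at a neutral score
        self.scores = {intent: {c: 0.0 for c in cands}
                       for intent, cands in candidates.items()}

    def respond(self, intent):
        # serve the currently highest-rated candidate for this intent
        return max(self.scores[intent], key=self.scores[intent].get)

    def feedback(self, intent, response, rating):
        # human-in-the-loop signal: +1 (helpful) or -1 (unhelpful)
        self.scores[intent][response] += rating

bot = ClosedLoopResponder({
    "returns": ["See our returns page.",
                "You can return items within 30 days; see the returns page."],
})
first = bot.respond("returns")
bot.feedback("returns", first, -1)  # user marks the answer unhelpful
second = bot.respond("returns")     # the loop now surfaces the alternative
```

<p>The key property is that <code>respond</code> reads state that <code>feedback</code> keeps rewriting, which is exactly what distinguishes this mode of deployment from a frozen model.</p>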
<p><strong>Characteristics of Closed Loop LLM</strong>:</p>
<p><a href="https://mauverick.com/wp-content/uploads/2025/04/LLM-Blog.001.jpeg"><img src="https://mauverick.com/wp-content/uploads/2025/04/LLM-Blog.001-300x225.jpeg" alt="LLM Blog.001" width="500" height="225" class="alignnone size-medium wp-image-2872" /></a></p>
<p><strong>Usage of Closed Loop Models</strong>:</p>
<p style="padding-left: 30px;">• <strong>Chatbots with User Feedback Mechanisms</strong>: Customer service chatbots incorporate mechanisms for users to provide feedback on the responses they receive, which is then used to improve the model.<br />
• <strong>Virtual Assistants Learning from User Interactions</strong>: Virtual assistants like Siri, Alexa, and Google Assistant learn and improve from the interactions they have with users, adapting to individual preferences and common queries.</p>
<p><strong>Closed Loop LLM Examples</strong>:</p>
<p style="padding-left: 30px;">1. <strong>Google Assistant&#8217;s Personalisation</strong>:</p>
<p style="padding-left: 60px;">• Description: Google Assistant learns from user interactions to personalise responses. For instance, if a user frequently asks about weather forecasts for a specific city, Google Assistant will adapt to provide quicker, more direct answers for that location.<br />
• Demonstrates: Adaptive learning and human-in-the-loop feedback, reducing the need for users to input the same queries repeatedly</p>
<p style="padding-left: 30px;">2. <strong>Chatbots in Customer Service</strong>:</p>
<p style="padding-left: 60px;">• Description: Many companies use chatbots on their websites that learn from customer interactions. For example, if a user frequently inquires about return policies, the chatbot will improve its responses over time to better address such queries.<br />
• Demonstrates: Human-in-the-loop learning and the potential for reduced bias through diverse customer feedback.</p>
<p style="padding-left: 30px;">3. <strong>Spotify&#8217;s Recommendation Algorithm</strong>:</p>
<p style="padding-left: 60px;">• Description: Spotify uses a form of Closed Loop LLM to recommend music based on user listening habits. The more a user listens to certain genres or artists, the more tailored the recommendations become.<br />
• Demonstrates: Adaptive learning and the ability to mitigate bias by learning from individual user preferences.</p>
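<p>The adaptive behaviour in the recommendation example can be illustrated with a minimal sketch. The <code>record_play</code>/<code>recommend</code> interface below is hypothetical and stands in for far more sophisticated collaborative filtering; the point is only that each interaction updates the state that future recommendations are drawn from.</p>

```python
# Minimal sketch of recommendations adapting to listening habits
# (hypothetical interface; real recommenders are far more elaborate).
from collections import Counter

class GenreRecommender:
    def __init__(self):
        self.plays = Counter()  # per-genre play counts for one user

    def record_play(self, genre):
        self.plays[genre] += 1  # every listen is a feedback signal

    def recommend(self):
        # surface the most-played genre so far; None before any plays
        return self.plays.most_common(1)[0][0] if self.plays else None

user = GenreRecommender()
for genre in ["jazz", "rock", "jazz", "jazz"]:
    user.record_play(genre)
top = user.recommend()  # "jazz", since it outweighs "rock" three plays to one
```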
<p><strong>Open Loop LLM</strong> (Examples: LLaMA 2/3, BERT, RoBERTa, BLOOM, Falcon)<br />
In contrast, an Open Loop LLM relies on its training dataset for its learning and development, with minimal to no adaptation based on user interactions post-deployment. This method can increase the chances of bias and wrongful interpretations if the training dataset is not carefully curated and diverse.</p>
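<p>The contrast with a Closed Loop model can be made concrete with a toy example. The bag-of-words scorer below is a deliberately simplified, hypothetical stand-in for a pre-trained model: it is fit once on a static dataset and then frozen, so inference never feeds anything back into the weights.</p>

```python
# Toy open-loop model: trained once on a static dataset, then frozen.
# The bag-of-words scheme is illustrative only, not a real LLM.
from collections import Counter

def train(dataset):
    # dataset: list of (text, label) with label +1 (positive) / -1 (negative)
    weights = Counter()
    for text, label in dataset:
        for word in text.lower().split():
            weights[word] += label  # words from positive texts gain weight
    return dict(weights)            # frozen after training

def predict(weights, text):
    # open loop: score against the frozen weights; nothing is updated
    score = sum(weights.get(w, 0) for w in text.lower().split())
    return "positive" if score >= 0 else "negative"

weights = train([("great service", 1), ("terrible delay", -1)])
label = predict(weights, "great product")  # scored with frozen weights
```

<p>Note that <code>predict</code> only reads <code>weights</code>; any bias baked in at training time persists unchanged for the life of the deployment, which is the risk flagged above.</p>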
<p><strong>Characteristics of Open Loop LLM</strong>:</p>
<p style="padding-left: 30px;">• <strong>Static Learning</strong>: The model&#8217;s learning is primarily based on its initial training dataset, with little to no adaptation to new information or changing contexts post-deployment.<br />
• <strong>Risk of Bias and Errors</strong>: If the training dataset contains biases or inaccuracies, the model may propagate these issues in its responses, potentially leading to harmful or misleading outputs.<br />
• <strong>Efficiency in Controlled Environments</strong>: Open Loop LLMs can be highly effective in controlled or specific domain applications where the scope of queries is limited and well-defined.<br />
<a href="https://mauverick.com/wp-content/uploads/2025/04/LLM-Blog.002.jpeg"><img class="alignnone size-medium wp-image-2873" src="https://mauverick.com/wp-content/uploads/2025/04/LLM-Blog.002-300x225.jpeg" alt="LLM Blog.002" width="500" height="225" /></a></p>
<p><strong>Usage of Open Loop Models</strong>:</p>
<p style="padding-left: 30px;">• <strong>Pre-Trained Language Models Without Fine-Tuning</strong>: Models like the original versions of BERT and RoBERTa, which were pre-trained on large datasets but not fine-tuned with specific user interaction data, fall into this category.<br />
• <strong>Domain-Specific Models Trained on Static Datasets</strong>: Models trained for specific tasks, such as medical diagnosis or legal document analysis, based on static datasets without ongoing user feedback loops.</p>
<p><strong>Open Loop LLM Examples</strong>:</p>
<p>1. <strong>BERT for Sentiment Analysis:</strong></p>
<p style="padding-left: 30px;">• Description: BERT (Bidirectional Encoder Representations from Transformers) was initially trained on a large corpus of text and then applied to sentiment analysis tasks without further training on user interactions. Its performance is based on the patterns learned from its initial training dataset.<br />
• Demonstrates: Static learning and the potential risk of bias if the training dataset does not represent diverse viewpoints.</p>
<p>2. <strong>Domain-Specific Medical Diagnosis Models</strong>:</p>
<p style="padding-left: 30px;">• Description: Some AI models are trained on static datasets of medical records and literature to diagnose diseases. These models do not learn from new patient interactions post-deployment.<br />
• Demonstrates: Efficiency in controlled environments and the risk of propagating biases or inaccuracies present in the training data.</p>
<p>3. <strong>Legal Document Analysis Tools</strong>:</p>
<p style="padding-left: 30px;">• Description: AI tools used in legal firms to analyse documents are often trained on large, static datasets of legal texts and precedents. They do not adapt to new legal documents or user feedback post-deployment.<br />
• Demonstrates: Efficiency in specific domain applications and the potential for errors if the training dataset is not comprehensive or up-to-date.</p>
<p><strong>Conclusion</strong>: By distinguishing between Closed loop and Open loop LLMs, developers and organisations can make informed decisions about which approach best suits their specific applications. This choice rests on several critical factors, including the need for adaptability to new trends and user behaviours, the potential for bias in model outputs, and the value of integrating user feedback for continuous improvement. As the landscape of language model deployment continues to evolve, understanding these distinctions will be pivotal in harnessing the full potential of LLMs while mitigating their limitations.</p>
]]></content:encoded>
			<wfw:commentRss>https://mauverick.com/understanding-the-classifications-of-llm-large-language-models-deployment/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
		<item>
		<title>AI Adoption from PoC to Production &#8211; Overcoming Impediments</title>
		<link>https://mauverick.com/impediments-of-ai-adoption-from-poc-to-production/</link>
		<comments>https://mauverick.com/impediments-of-ai-adoption-from-poc-to-production/#comments</comments>
		<pubDate>Wed, 06 Nov 2024 16:55:46 +0000</pubDate>
		<dc:creator><![CDATA[Abhijit Bangalore]]></dc:creator>
				<category><![CDATA[ABHIJIT BANGALORE]]></category>
		<category><![CDATA[ABHIJIT BANGALORE & SRIVIDYA SUDARSHAN]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Adoption]]></category>
		<category><![CDATA[AI Governance]]></category>
		<category><![CDATA[Organizational AI Governance]]></category>
		<category><![CDATA[PoC to Production]]></category>

		<guid isPermaLink="false">https://mauverick.com/?p=2414</guid>
		<description><![CDATA[AI Adoption from PoC to Production Overcoming Impediments In continuation of the AI blog series, this blog gives viewpoints on the impediments for AI adoption from PoC to production. When we take a look at the current AI adoption, we see that the chip makers are way ahead of their software counterparts, which in my view is a first [&#8230;]]]></description>
				<content:encoded><![CDATA[<h2 style="text-align: center;"><span style="color: #000000;">AI Adoption from PoC to Production</span></h2>
<h2 style="text-align: center;"><span style="color: #000000;">Overcoming Impediments</span></h2>
<p style="text-align: justify;"><span style="color: #000000;">In continuation of the AI blog series, this blog gives viewpoints on the impediments to AI adoption from PoC to production. When we take a look at current AI adoption, we see that the chip makers are way ahead of their software counterparts, which in my view is a first we have seen over the past few decades!</span></p>
<p style="text-align: justify;"><span style="color: #000000;">What&#8217;s holding adoption back? Is it a lack of proper use case fitment, non-availability of base data, vulnerability of current security frameworks to DDoS (Distributed Denial of Service) attacks, or CSOs believing that current systems and processes can’t handle protection policies (data, network)? Well, one can’t be sure! It could be a combination of all of these! And this concoction is why the conversion of AI use cases from PoC to production is pegged at a measly 28% at best.</span></p>
<p style="text-align: justify;"><span style="color: #000000;">Diving into the primary obstacles:</span></p>
<h3 style="text-align: justify;"><span style="color: #000000;">Primary Obstacles</span></h3>
<ol style="text-align: justify;">
<li><span style="color: #000000;"><b>Data Quality</b><span class="Apple-converted-space"> </span></span></li>
</ol>
<p style="padding-left: 30px;">Inadequate base data quality hinders AI model effectiveness.<br />
At least 30% of generative AI (GenAI) projects will be abandoned after proof of concept by the end of 2025, due to poor data quality, inadequate risk controls, escalating costs or unclear business value, according to Gartner, Inc. According to the IBM Big Data &amp; Analytics Hub, poor data costs the US economy $3.1 trillion every year. Research from Cognilytica points out that over 80% of the time in most AI and Machine Learning projects is spent on data preparation and engineering tasks. This emphasises the need for high-quality data, obtained by leveraging solutions and best practices in data preparation, management and engineering, for implementing effective AI algorithms.</p>
<ol style="text-align: justify;" start="2">
<li><b>Risk Controls</b><span class="Apple-converted-space"> </span></li>
</ol>
<p style="padding-left: 30px;">Inadequate risk management frameworks and controls.</p>
<p style="padding-left: 30px;">According to a recent survey report by Deloitte, tracking generative AI adoption and challenges, risk and governance are among the top 4 major challenges companies have in implementing generative AI applications and tools. Only 23% of the companies were highly prepared for risk and governance.</p>
<ol style="text-align: justify;" start="3">
<li><b>Infrastructure Costs</b>: <span class="Apple-converted-space"> </span></li>
</ol>
<p style="padding-left: 30px; text-align: justify;">Prohibitive costs of infrastructure development and maintenance.<br />
At a recent Gartner Data &amp; Analytics Summit in Sydney, an analyst cited the cost of projects as a big pressure on generative AI deployment, with upfront investments ranging from 5 million to 20 million.</p>
<ol style="text-align: justify;" start="4">
<li><b>Uncertain ROI</b>: <span class="Apple-converted-space"> </span></li>
</ol>
<p style="padding-left: 30px;">Difficulty quantifying financial benefits of AI use cases.</p>
<p style="padding-left: 30px;">As AI applications vary widely across industries, it is difficult to establish standardized benchmarks for expected ROI. Typically, around 90% of AI initiatives do not meet their ROI targets, primarily due to challenges in deploying models efficiently and aligning outcomes with business goals.</p>
<ol style="text-align: justify;" start="5">
<li><b>Technical Expertise</b><span class="Apple-converted-space"> </span></li>
</ol>
<p style="padding-left: 30px;">Shortage of skilled professionals to create and maintain AI models.</p>
<p style="padding-left: 30px;">41% of businesses are struggling to find employees to support their generative AI initiatives, according to Enterprise Strategy Group&#8217;s recent survey</p>
<ol style="text-align: justify;" start="6">
<li><b>Trust</b>: Trust in the models being used, where the quality &amp; source of the data is not known</li>
<li><b>Model Scalability and Performance: </b></li>
</ol>
<p style="padding-left: 30px;">According to a 2022 study by the MLOps Community, 62% of data scientists face issues with model performance when moving to production. PoCs are typically conducted in a controlled environment with limited resources, so the AI model needs to be scaled before it can move into production. A robust deployment strategy needs to be in place to scale and serve concurrent users without degradation in performance.</p>
<p style="text-align: justify;">That being said, there are areas where AI adoption is seeing greater traction, and a few use cases have made it into production successfully.<span class="Apple-converted-space"> </span></p>
<h3 style="text-align: justify;">Successful AI Adoption Use Cases</h3>
<ul style="text-align: justify;">
<li><b>Productivity Measurement</b></li>
</ul>
<p style="padding-left: 60px; text-align: justify;">Retail, hospitality and customer service industries leverage AI to measure process and employee productivity. According to the <a href="https://www.forbes.com/advisor/business/software/ai-in-business/">Forbes Advisor survey</a>, 53% of businesses apply AI to improve production processes, while 51% adopt AI for process automation and 52% utilize it for search engine optimization tasks such as keyword research.</p>
<ul style="text-align: justify;">
<li><b>Data Analysis</b></li>
</ul>
<p style="padding-left: 60px; text-align: justify;">AI-driven data segmentation and ROI calculations are gaining traction.</p>
<ul style="text-align: justify;">
<li><b>Automation Agents</b></li>
</ul>
<p style="padding-left: 60px; text-align: justify;">Self-sustaining routines automate tasks efficiently. <a href="https://www.salesforce.com/news/stories/generative-ai-statistics/"><b>75%</b></a> of users want to leverage AI to automate workplace tasks.</p>
<ul style="text-align: justify;">
<li><b>Code assistance</b></li>
</ul>
<p style="padding-left: 60px; text-align: justify;">AI tools aid in software development. An <a href="https://www.techtarget.com/esg-global/research-report/code-transformed-tracking-the-impact-of-generative-ai-on-application-development-2/">Enterprise Strategy Group survey</a> of application developers found that 63% used generative AI in production, citing faster code creation and improved customer support as top benefits.<span class="Apple-converted-space"> </span>Business executives observe a <a href="https://ventionteams.com/solutions/ai/adoption-statistics">55% developer productivity</a> improvement due to generative AI adoption.</p>
<h3 style="text-align: justify;">Addressing the Challenges</h3>
<p style="text-align: justify;">To bridge the gap between PoC and production, it&#8217;s crucial to focus on the following areas:</p>
<ul style="text-align: justify;">
<li><b>Data Quality Improvement</b></li>
</ul>
<p style="padding-left: 60px; text-align: justify;">Ensuring reliable, high-quality data by defining a data governance framework that enforces accuracy, completeness, consistency, and timeliness of the data used in AI models.</p>
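<p>As a hypothetical illustration, three of the dimensions named above (completeness, consistency, timeliness) can be expressed as programmatic checks that a governance pipeline might run on records before they feed an AI model. The field names and thresholds are invented for the example.</p>

```python
# Hypothetical data-quality gate: flags records that fail completeness,
# consistency, or timeliness checks before they reach a model pipeline.
from datetime import date

REQUIRED = {"id", "amount", "updated"}

def quality_issues(record, today=date(2024, 11, 6), max_age_days=90):
    issues = []
    # completeness: all required fields present and non-empty
    missing = [f for f in REQUIRED if not record.get(f)]
    if missing:
        issues.append(f"missing: {sorted(missing)}")
    # consistency: amount must be a non-negative number
    amount = record.get("amount")
    if amount is not None and (not isinstance(amount, (int, float)) or amount < 0):
        issues.append("inconsistent: amount must be a non-negative number")
    # timeliness: record must have been updated within the freshness window
    updated = record.get("updated")
    if updated and (today - updated).days > max_age_days:
        issues.append("stale: updated outside the freshness window")
    return issues

ok = quality_issues({"id": 1, "amount": 10.0, "updated": date(2024, 10, 1)})
bad = quality_issues({"id": 2, "amount": -5, "updated": date(2024, 1, 1)})
```

<p>A real framework would of course cover accuracy as well (which needs a source of truth to compare against) and run such checks continuously rather than per record.</p>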
<ul style="text-align: justify;">
<li><b>Risk Management Frameworks</b></li>
</ul>
<p style="padding-left: 60px; text-align: justify;">Developing and implementing robust risk controls.</p>
<ul style="text-align: justify;">
<li><b>Cost-Effective Infrastructure</b></li>
</ul>
<p style="padding-left: 60px; text-align: justify;">Exploring cloud-based, scalable solutions. Platforms like AWS, Google Cloud Platform (GCP), and Microsoft Azure offer specialized AI and machine learning services that allow businesses to scale resources up or down based on demand. Applications that require real-time processing, such as self-driving cars, wearable devices, and smart home appliances, can leverage edge AI, which enables models to be deployed closer to where the data is generated, reducing data transfer costs and reliance on centralized cloud processing.</p>
<ul style="text-align: justify;">
<li><b>Clear ROI Definition</b></li>
</ul>
<p style="padding-left: 60px; text-align: justify;">Establishing measurable financial benefits. A study from <a href="https://www2.deloitte.com/us/en/insights/industry/technology/artificial-intelligence-roi.html">ESI ThoughtLab and Deloitte</a> found that organizations with mature AI strategies generally see higher ROI, as these companies focus on robust data management, effective tracking, privacy, security, ethics and strong alignment with strategic goals from the onset.</p>
<ul style="text-align: justify;">
<li><b>Skill Development</b></li>
</ul>
<p style="padding-left: 60px; text-align: justify;">Investing in AI talent acquisition and training. Businesses need to focus on upskilling, knowledge sharing, and fostering a culture of continuous learning among existing employees by investing in the right AI training programs for data engineers, developers and analysts.</p>
<p style="text-align: justify;">By embracing AI governance after tackling these impediments, organizations can unlock the full potential of AI, driving successful adoption from proof of concept to production. However, concerns are emerging as AI adoption is set to increase multifold.</p>
<h3 style="text-align: justify;">Emerging Concerns</h3>
<ul style="text-align: justify;">
<li><b>Data Governance</b></li>
</ul>
<p style="padding-left: 60px; text-align: justify;">AI output, data publication and usage raise concerns around copyright laws, data infringement, liability, and monitoring</p>
<ul style="text-align: justify;">
<li><b>Regulatory Compliance</b></li>
</ul>
<p style="padding-left: 60px; text-align: justify;">EU and UK AI liability and copyright laws require adherence, necessitating AI governance and monitoring standards</p>
<p style="text-align: justify;"><span style="color: #000000;"><br />
In my next blog, let’s look into the governance and regulatory boxes. Until then…<span class="Apple-converted-space"> </span></span></p>
<p style="text-align: center;">&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;-<img class="  wp-image-2415 aligncenter" src="https://mauverick.com/wp-content/uploads/2024/11/2-stand-back-300x171.jpg" alt="2-stand-back" width="519" height="295" />&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8212;&#8211;</p>
<p style="text-align: left;"><em><span style="color: #000000;">References<b>:<span class="Apple-converted-space"> </span></b></span></em></p>
<p style="text-align: justify;"><em><span style="color: #993366;"><a style="color: #993366;" href="https://www.recodesolutions.com/why-enterprises-struggle-to-move-gen-ai-led-automation-beyond-pilot-projects-and-into-real-applications/">https://www.recodesolutions.com/why-enterprises-struggle-to-move-gen-ai-led-automation-beyond-pilot-projects-and-into-real-applications/</a></span></em></p>
<p style="text-align: justify;"><em><span style="color: #993366;"><a style="color: #993366;" href="https://www.linkedin.com/pulse/from-prototype-production-overcoming-ai-deployment-hurdles-xzr1c/">https://www.linkedin.com/pulse/from-prototype-production-overcoming-ai-deployment-hurdles-xzr1c/</a></span></em></p>
<p style="text-align: justify;"><em><span style="color: #993366;"><a style="color: #993366;" href="https://www.techtarget.com/searchenterpriseai/feature/Survey-Enterprise-generative-AI-adoption-ramped-up-in-2024">https://www.techtarget.com/searchenterpriseai/feature/Survey-Enterprise-generative-AI-adoption-ramped-up-in-2024</a></span></em></p>
<p style="text-align: justify;"><em><span style="color: #993366;"><a style="color: #993366;" href="https://fair.rackspace.com/insights/eight-blockers-transitioning-ai-production/">https://fair.rackspace.com/insights/eight-blockers-transitioning-ai-production/</a></span></em></p>
]]></content:encoded>
			<wfw:commentRss>https://mauverick.com/impediments-of-ai-adoption-from-poc-to-production/feed/</wfw:commentRss>
		<slash:comments>0</slash:comments>
		</item>
	</channel>
</rss>
