
Generative AI is revolutionizing the digital landscape by enabling the creation of new, human-like content across various industries. Its impact is most profound among digital natives, who are seeing changes in how they interact with technology and services. This technology drives innovation in entertainment, education, and healthcare, making experiences more personalized, efficient, and creative, heralding a new era of digital interaction and value creation. In line with this focus, a panel discussion titled “GenAI-Driven Transformation at Scale in the Digital Native World” was held at the recent Elets Digital Natives Summit in Bengaluru. Our panelists engaged in in-depth discussions, sharing their insights and visions. Edited excerpts:
Pranav Saxena, Chief Product and Technology Officer, Flipkart Health+

Pranav Saxena, Chief Product and Technology Officer, Flipkart Health+, commenced the discussion by stating, “Looking at the utilization and impact of emerging technologies, specifically in the context of e-commerce and broader applications, three primary themes can be identified: consumer impact, technological challenges, and the complexities of solution development.”

He said, “Firstly, from the consumer perspective, the efficacy of recommendations in e-commerce illustrates a pivotal concern. While algorithms can suggest numerous products, the value of these recommendations hinges on their ability to influence consumer behavior—leading to clicks or purchases within a narrow window. This underscores the need for recommendations not just to be numerous but impactful, enhancing the likelihood of conversion. Moreover, search algorithms have significantly advanced in understanding user intent, diminishing the struggle with finding relevant results except in cases of broad or non-contextual queries. The implication here is the importance of thoughtful consideration of return on investment (ROI) and use case definition to ensure the technology meets actual consumer needs.”
Secondly, the conversation shifts to the unique landscape of technology adoption and usage in India, highlighting cultural nuances like a preference for auditory over textual information and the need for summarization due to limited patience for lengthy content. This points to the necessity for adapting technology to fit different communication styles and information consumption preferences. The introduction of conversational AI, like ChatGPT, has revolutionized interaction paradigms, yet there’s an acknowledgment that different contexts may require varied forms of engagement beyond text-based interfaces.
“Thirdly, the technological infrastructure itself is examined. Issues such as API limitations, tokenization constraints, and context retention across extended interactions are identified as significant hurdles. These challenges suggest that while conversational AI presents a novel alternative to traditional search and interaction models, substantial investment is needed to enhance scalability and performance. Additionally, the computational cost associated with these technologies is considerably higher than traditional search engines, necessitating advancements in efficiency and cost-effectiveness to justify the ROI”, he added.
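To make the tokenization and context-retention constraints he describes concrete, here is a minimal sketch (ours, not the panel’s) of how an application might count tokens and trim older conversation turns to fit a fixed context window; the tiktoken encoding name and the 4,096-token budget are illustrative assumptions, not tied to any particular production model.

```python
# Minimal sketch: trimming chat history to fit a fixed context window.
# Assumptions: tiktoken's "cl100k_base" encoding and a 4,096-token budget
# are illustrative choices, not a specific vendor's limits.
import tiktoken

MAX_CONTEXT_TOKENS = 4096  # assumed budget for the whole prompt

enc = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    """Return the number of tokens the encoding assigns to `text`."""
    return len(enc.encode(text))

def trim_history(turns: list[str], budget: int = MAX_CONTEXT_TOKENS) -> list[str]:
    """Keep the most recent turns whose combined token count fits the budget."""
    kept, used = [], 0
    for turn in reversed(turns):      # walk from newest to oldest
        cost = count_tokens(turn)
        if used + cost > budget:
            break                     # older context gets dropped
        kept.append(turn)
        used += cost
    return list(reversed(kept))       # restore chronological order
```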
Lastly, the discussion encompasses the governance of outputs, including legal, ethical, and robustness considerations. The need for extensive investment in data management, governance structures, and technological refinement is emphasized, reflecting the evolving nature of regulatory and ethical standards in digital content creation.
Concluding the session he stated, “While emerging technologies offer profound opportunities for innovation and engagement, realizing their full potential requires careful consideration of user experience, significant technological enhancements, and robust governance mechanisms. The journey towards integrating these technologies into scalable, effective solutions entails a multifaceted investment, particularly in research and development, to navigate the complexities of modern digital ecosystems.”
Pavan Boggarapu, Head of Product, upGrad
The primary challenge we face today is data quality and availability. Data is the fuel for any model: if you don’t have the right data and feed incorrect data artifacts to the model, it will yield unsatisfactory outcomes. Your data should be fresh, complete, and unbiased, and whatever input you give the model, its response must come back in a consumable fashion. The main problem industries face is the availability of fresh, vast data, shared Pavan Boggarapu, Head of Product, upGrad, at the Elets Digital Natives Summit.
He added that the second key point is interoperability and compatibility. Integrating AI or conversational solutions into an existing ecosystem poses challenges. Companies often feel pressure from boards, investors, or management committees to implement AI solutions without considering their validity for the sector. This lack of interoperability and compatibility among systems can hinder progress significantly. Implementing a solution for systems that don’t communicate effectively is like playing cricket with a hockey bat.
Thirdly, cost and ROI play a significant role. While it’s fine to start implementing AI solutions, maintaining them requires the right skills and expertise. Some companies are even appointing Chief Generative Officers to align with trends, but this can be expensive.
“Bias and fairness are crucial concerns. Models might give the green light, but they might not highlight the broader picture or boundaries, leading to unforeseen consequences”, stated Pavan.
Lastly, legal and regulatory considerations are critical. Models may not keep you updated on legal changes, which can lead to pitfalls. It’s crucial to be cautious when leveraging models in areas where legal updates are frequent and unpredictable.
Shakti Goel, Chief Architect and Data Scientist, Yatra Online
Artificial Intelligence (AI) and Machine Learning (ML) are essentially a series of mathematical equations applied to data. We’re now dealing with millions, if not billions, of equations. By analyzing data, these models adjust their parameters and constants to improve accuracy and functionality. Machine learning, a key component of AI, spans several critical developments, shared Shakti Goel, Chief Architect and Data Scientist, Yatra Online.
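As a toy illustration of that idea (ours, not something presented at the summit), the sketch below fits the two constants of a straight line y = w·x + b to invented data by repeatedly nudging them to reduce error; large models do the same thing with billions of parameters.

```python
# Toy sketch: "equations adjusting their constants to fit data".
# Fits y = w*x + b by gradient descent; the data and learning rate are invented.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]          # roughly y = 2x + 1, with noise

w, b = 0.0, 0.0                    # the model's adjustable "constants"
lr = 0.01                          # learning rate

for _ in range(2000):
    # Gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w               # nudge each parameter downhill
    b -= lr * grad_b

print(f"w = {w:.2f}, b = {b:.2f}")  # converges to roughly w = 2, b = 1
```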
Perceptrons, the foundation of neural networks, were introduced in the 1950s. The 1980s saw the advent of Recurrent Neural Networks, followed by Long Short-Term Memory (LSTM) models in the late 1990s, which revolutionized time series analysis. By 2013 and 2014, Generative Adversarial Networks (GANs) emerged, laying the groundwork for technologies like deepfakes.
He added, “The limiting factor was not mathematical theory or human ingenuity but computational power, which has seen exponential growth. For example, the Pentium 1 processor in the mid-90s had 3 million transistors; today, chips like NVIDIA’s H100 GPU boast 80 billion transistors. Similarly, storage costs have plummeted from $1 million for 1 terabyte in the late ’90s to around $30-$40 today. This surge is partly attributed to Moore’s Law, which predicted the doubling of transistors on a chip approximately every two years, enhancing performance while reducing costs.”
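The Moore’s Law arithmetic he cites is easy to check. The back-of-the-envelope sketch below (our illustration, assuming a 3-million-transistor baseline in 1995 and a doubling every two years) lands in the same order of magnitude as today’s GPUs.

```python
# Back-of-the-envelope Moore's Law check. Illustrative assumptions:
# 3 million transistors in 1995, doubling every two years.
start_year, start_transistors = 1995, 3_000_000
target_year = 2023                      # roughly when the H100 shipped

doublings = (target_year - start_year) / 2
predicted = start_transistors * 2 ** doublings
print(f"{predicted:,.0f} transistors")  # ~49 billion, same order as the H100's 80 billion
```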
The advent of cloud technology has further democratized access to computational resources, allowing individuals and organizations to leverage powerful machines and algorithms without the need for physical ownership. This environment has been conducive to the development and deployment of Generative AI (GenAI), which relies on pre-existing ML technologies for functions like image or video creation.
Significant investment from large corporations such as Google, Microsoft, Amazon, Facebook, and NVIDIA has fueled these advancements. The development of technologies like GPT-3.5 involved substantial resources, highlighting the scale of investment in AI research and development.
Furthermore he added, “Advancements in network speeds have facilitated the transfer of vast amounts of data necessary for training sophisticated AI models. The progression from kilobit modems to gigabit-per-second home internet speeds exemplifies this growth.”
Looking ahead, the potential introduction of Quantum AI promises computational capabilities beyond our current understanding, potentially surpassing the processing power of classical computers by magnitudes unimaginable today. This evolution underscores the rapid pace of technological advancement in the fields of AI and ML, driving us towards a future where the possibilities seem boundless.
Ajit Narayanan, CTPO, Licious
The shift from conventional recurrent neural networks to attention-based architectures, particularly observed in the years around 2017 and 2018, marked a significant transition in my experience, shared Ajit Narayanan, CTPO, Licious. We extensively explored Long Short-Term Memory (LSTM) models, which were predominantly used for natural language processing. However, a notable limitation of these models was their sequential processing nature, hindering their ability to grasp complex relationships between words across extended contexts. This limitation somewhat constrained their application scope.
The landscape began to transform with the advent of the Transformer architecture, introduced in Google’s seminal 2017 paper “Attention Is All You Need”. This innovation paved the way for subsequent developments, including OpenAI’s GPT-3, trained with an impressive 175 billion parameters, which later underpinned ChatGPT. This model’s capacity to predict and understand words within long context windows and its proficiency in generating coherent outputs were groundbreaking. Further enhancing its utility was the integration of a chat interface, making these advanced models accessible beyond the realm of data scientists.
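For readers unfamiliar with the architecture, the sketch below (ours, not the panelist’s) shows the scaled dot-product attention at the heart of the Transformer: unlike an LSTM’s step-by-step recurrence, every token scores its relationship to every other token in the window at once. The shapes and values are invented for illustration.

```python
# Minimal scaled dot-product attention, the core of the Transformer
# ("Attention Is All You Need", 2017). Shapes are illustrative.
import numpy as np

def attention(Q, K, V):
    """Each query attends to all keys simultaneously - no recurrence."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # weighted mix of values

seq_len, d_model = 5, 8                              # tiny invented example
rng = np.random.default_rng(0)
Q = rng.normal(size=(seq_len, d_model))
K = rng.normal(size=(seq_len, d_model))
V = rng.normal(size=(seq_len, d_model))
print(attention(Q, K, V).shape)                      # (5, 8): one vector per token
```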
“This accessibility revealed the potential of fine-tuning these models through intelligent prompting, altering our interaction with such technologies. Initially skeptical of ChatGPT’s longevity, I quickly realized its profound impact as it evolved from GPT-3 to GPT-3.5 Turbo, and then to GPT-4, with GPT-5 on the horizon. The insights derived from simply feeding data and prompts to ChatGPT have been astonishing, enabling me to conduct analyses independently, without reliance on data analysts”, he added.
Identifying clear use cases for this architecture became a natural next step. Customer support emerged as an obvious application, given the repetitive nature of consumer inquiries. By analyzing vast amounts of conversation data, these models can generate precise responses, confined to the trained domain. This precision in generating relevant answers has proven exceptionally valuable.
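One common way to keep generated answers confined to a known domain is sketched below in general terms; this is not Licious’s implementation, the FAQ entries are invented, and `call_llm` is a hypothetical placeholder for whatever model API is used. The pattern is to retrieve relevant support snippets and instruct the model to answer only from them.

```python
# Sketch of grounding a support answer in retrieved domain snippets.
# The FAQ contents are invented; `call_llm` is a hypothetical placeholder.
FAQ = {
    "refund": "Refunds are processed within 5-7 business days.",
    "delivery": "Orders placed before 6 pm ship the same day.",
}

def retrieve(query: str) -> list[str]:
    """Naive keyword retrieval; real systems use embeddings or vector search."""
    return [text for key, text in FAQ.items() if key in query.lower()]

def build_prompt(query: str) -> str:
    """Constrain the model to the retrieved context."""
    context = "\n".join(retrieve(query)) or "No matching policy found."
    return (
        "Answer ONLY from the support context below. If the answer is not "
        "there, say you don't know.\n"
        f"Context:\n{context}\n\nCustomer question: {query}"
    )

# response = call_llm(build_prompt("When will I get my refund?"))  # hypothetical
print(build_prompt("When will I get my refund?"))
```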
Internally, we’ve explored uses in creative and content writing, and analytics, discovering that the models can produce significant insights from properly prompted datasets. We’re experimenting with these models to analyze specific datasets, aiming to unearth insights beyond mere data retrieval.
While these applications are still in the experimental phase and not yet production-ready, the results have been overwhelmingly positive. This journey has reshaped our understanding of the potential applications of artificial neural networks, revealing a vast array of use cases from customer support to personalization, contingent on the construction of model inputs.
Rishi Srivastava, Director – Strategy & GTM, Exotel
“The increasing volume of data in today’s digital age necessitates the adoption of Generative AI (GenAI) to facilitate swift, accurate decision-making. This demand arises from the need to efficiently process vast datasets, a challenge that GenAI is uniquely equipped to handle. Furthermore, the iterative learning processes central to machine learning technologies underscore the importance of GenAI in achieving refined precision through repeated adjustments, exemplified by AlphaGo’s evolution to defeat a world champion Go player after numerous self-play sessions”, shared Rishi Srivastava, Director – Strategy & GTM, Exotel.
Moreover, the specialized needs of diverse industries, such as the banking sector’s stringent authentication requirements for safeguarding personal data, illustrate the necessity for GenAI’s adaptability and learning capabilities. As industries are characterized by unique challenges and skillset demands, GenAI’s ability to learn and adapt to specific requirements is critical. The accelerated progress in computing and algorithmic complexity further solidifies the role of GenAI in addressing immediate computational problems, showcasing its indispensability in navigating the complexities of modern technological landscapes.