Author Archives: Vinay Igure
Powering the Gen AI transformation
For most of us in the tech industry, the transformative impact of LLMs and generative AI is quite apparent. What is less visible to many, however, is the physical infrastructure powering this LLM-based transformation. Living in Ashburn, Virginia, though, I … Continue reading
“System 1-System 2” thinking and AGI
My post on Moravec’s paradox felt a bit incomplete. I wanted to expand a bit more on the probable causes behind the paradox, as well as add some more thoughts on Yann LeCun’s theory on what it would take … Continue reading
Moravec’s Paradox: AI for intellectual tasks and AI for physical tasks
I recently learned about Moravec’s Paradox. First framed by Hans Moravec, it is an observation that AI (or computers in general) is good at tasks that we humans consider complex but is bad at tasks that we humans consider very … Continue reading
AI History: The Dartmouth Summer AI Research Project of 1955
As fun as it is to learn about the latest updates in the world of AI, I also find it very interesting to learn about the history of this field. One fascinating project was the “Dartmouth Summer Research Project on … Continue reading
Improving reasoning in LLMs using prompt engineering
Getting machines to perform reasoning tasks has long been a cherished goal of AI. These include problems such as word problems in mathematics and analytical commonsense reasoning (the kind you typically see in standardized tests such as the SAT/GRE … Continue reading
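The excerpt only gestures at the techniques the post covers; as a rough, hedged illustration of the general idea, the sketch below contrasts a direct prompt with a chain-of-thought-style prompt for a simple math word problem. The question, the worked exemplar, and the wording are illustrative assumptions rather than material from the post, and either string could be sent to any chat-completion model.

```python
# Illustrative sketch: a plain prompt vs. a chain-of-thought style prompt
# for a math word problem. No specific LLM API is assumed; the point is
# only how the two prompts differ.

QUESTION = (
    "A train travels 60 miles in 1.5 hours, then 40 miles in 0.5 hours. "
    "What is its average speed for the whole trip?"
)

# Direct prompt: asks for the answer with no guidance on reasoning.
direct_prompt = f"Q: {QUESTION}\nA:"

# Chain-of-thought prompt: a worked example with intermediate steps, plus a
# cue to reason step by step, which tends to elicit the model's reasoning.
cot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: He starts with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
    f"Q: {QUESTION}\n"
    "A: Let's think step by step."
)

if __name__ == "__main__":
    print("--- direct prompt ---")
    print(direct_prompt)
    print("\n--- chain-of-thought prompt ---")
    print(cot_prompt)
```

Few-shot exemplars with worked reasoning, and the zero-shot "Let's think step by step" cue, are the best-known prompt-engineering tricks for drawing step-by-step reasoning out of an LLM; the post itself goes into more detail.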
Examining claims of biosecurity risks from Open Foundation Models
One of the primary drivers of regulatory efforts around generative AI and foundation models is the fear of societal harm from the models. There are many claims, some from highly respected AI experts, that this technology has the power … Continue reading
Examining Benefits & Risks of Open Foundation Models
As new technologies enter the market, government regulations tend to follow for a few reasons. The recent wave of AI regulation is largely being positioned as addressing the first two of those reasons. Although many commentators make the claim that … Continue reading
Open Models: The focus of AI Governance
Every new technology ushers in both excitement and concerns. The latest innovations in AI, with the introduction of large language models, are no exception. The quantum leap in progress demonstrated by these models caught most people, even those working in … Continue reading
New Data on AI Policy and Governance
The 2024 AI Index from Stanford University's Institute for Human-Centered Artificial Intelligence (HAI) has been released. It collects and shares data highly relevant to the broad field of AI, covering topics ranging from the latest investments in AI research and development, AI in fields … Continue reading
A simple taxonomy to guide your LLM prompt engineering
Large language models (LLMs) have truly democratized AI development. Software engineers can now build many AI applications without the need for dedicated model development. These LLMs are a one-stop shop for performing a variety of natural language processing (NLP) tasks. … Continue reading
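As a hedged sketch of the "one-stop shop" point (not the taxonomy the post actually proposes), the snippet below shows a handful of classic NLP tasks reduced to prompt templates aimed at a single model; the task names and template wording are illustrative assumptions.

```python
# Illustrative sketch (not the post's actual taxonomy): several classic NLP
# tasks expressed as prompt templates for one chat-style LLM.

NLP_TASK_PROMPTS = {
    "classification": (
        "Classify the sentiment of this review as positive, negative, or "
        "neutral.\nReview: {text}\nSentiment:"
    ),
    "extraction": (
        "List every person and organization mentioned in the text below, "
        "one per line.\nText: {text}\nEntities:"
    ),
    "summarization": (
        "Summarize the following text in one sentence.\nText: {text}\nSummary:"
    ),
    "translation": (
        "Translate the following text into French.\nText: {text}\nFrench:"
    ),
}


def build_prompt(task: str, text: str) -> str:
    """Fill in the template for a given task; the same LLM serves them all."""
    return NLP_TASK_PROMPTS[task].format(text=text)


if __name__ == "__main__":
    sample = "The battery lasts two full days, which honestly surprised me."
    for task in NLP_TASK_PROMPTS:
        print(f"=== {task} ===")
        print(build_prompt(task, sample))
        print()
```

Grouping prompts by task type like this is one simple way a taxonomy can guide which template to reach for; the post lays out its own categories.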