Author Archives: Vinay Igure
The power of raw computation and the “bitter lesson” for AI research
Last week Rich Sutton and Andrew Barto were awarded the Turing Award for their pioneering contributions to the field of Reinforcement Learning. I covered their award in a previous post. Here I want to discuss an essay that Rich Sutton … Continue reading
The Turing Award for Reinforcement Learning Pioneers
This week Andrew Barto and Rich Sutton were awarded the ACM Turing Award, the highest award in the field of Computer Science, for their pioneering contributions to the field of Reinforcement Learning. With the advances in Artificial Intelligence over the … Continue reading
AI Agents in full force
2025 is becoming the year of Agentic AI. There were three big announcements in this field over the last couple of days. On March 5th, Microsoft rolled out its Agentic AI tools for sales agents. Microsoft has been all in … Continue reading
Google’s AI Co-Scientist: Demonstrating the power of AI Agents
One of the big promises of the new generation of large language models is their potential to transform how basic scientific research is done in fields such as medicine, biology, and pharmaceuticals. The scientific research process in these fields currently … Continue reading
Sycophancy in LLMs
A recent paper from a group at Stanford claims that LLMs exhibit sycophantic behavior (SycEval: Evaluating LLM Sycophancy). They found that with the right set of prompts, LLMs exhibited this behavior in about 59% of cases with Google’s Gemini being … Continue reading
Thoughts on the White House Executive Order on AI — Part 2
In part 1 of this series, I examined the executive order (EO) in terms of its implications for federal departments that deal with national security issues. In this post, I’ll examine a few issues related to safety and fairness in … Continue reading
Initial thoughts on the White House Executive Order on AI — Part 1
NOTE: The views expressed here do not reflect those of my employer. In October 2023, the White House issued an executive order on AI. This predictably prompted a rush of commentators arguing over whether or not the government should be … Continue reading
Tracking AI “Incidents”
The field of cybersecurity is influencing how we study and discuss safety and security issues related to AI-driven applications. As I discussed previously, there’s the good (reusing existing robust methodologies) and the not so good (creating a lot of FUD) … Continue reading
Fairness in AI: A few common perspectives around this topic
The increasing use of AI in critical decision-making systems demands fair and unbiased algorithms to ensure just outcomes. An important example is the COMPAS scoring system, which aimed to predict rates of recidivism in defendants. ProPublica dug into the … Continue reading
Jailbreaking LLMs: Risks of giving end users direct access to LLMs in applications
All open LLM models are released with a certain amount of built-in guardrails governing what they will or will not answer. These guardrails are essentially sample conversational data used to train the models in the … Continue reading