Monthly Archives: April 2024

Initial thoughts on the White House Executive Order on AI — Part 1

NOTE: The views expressed here do not reflect those of my employer. In October 2023, the White House issued an executive order on AI. This predictably prompted a rush of commentators arguing about whether or not the government should be … Continue reading

Tracking AI “Incidents”

The field of cybersecurity is influencing how we study and discuss safety and security issues related to AI-driven applications. As I discussed previously, there’s the good (reusing existing robust methodologies) and the not so good (creating a lot of FUD) … Continue reading

Fairness in AI: A few common perspectives around this topic

The increasing use of AI in critical decision-making systems demands fair and unbiased algorithms to ensure just outcomes. An important example is the COMPAS scoring system, which aimed to predict the rate of recidivism among defendants. ProPublica dug into the … Continue reading
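To make the disparity ProPublica highlighted concrete, here is a minimal sketch (not from the original post) that compares false positive rates across two groups on a toy dataset; the records, group labels, and risk flags below are entirely hypothetical.

    # Minimal sketch: per-group false positive rates on hypothetical data.
    # Each record is (group, predicted_high_risk, actually_reoffended).
    records = [
        ("A", True, False), ("A", False, False), ("A", True, True), ("A", True, False),
        ("B", True, False), ("B", False, False), ("B", False, True), ("B", True, True),
    ]

    def false_positive_rate(rows):
        # Share of people who did NOT reoffend but were still flagged as high risk.
        negatives = [r for r in rows if not r[2]]
        return sum(1 for r in negatives if r[1]) / len(negatives) if negatives else 0.0

    for group in ("A", "B"):
        group_rows = [r for r in records if r[0] == group]
        print(group, round(false_positive_rate(group_rows), 2))

A large gap between the two printed rates is the kind of outcome ProPublica reported in its COMPAS analysis.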

Jailbreaking LLMs: Risks of giving end users direct access to LLMs in applications

All open LLMs are released with some built-in guardrails governing what they will and will not answer. These guardrails are essentially sample conversational data used to train the models in the … Continue reading
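As a rough illustration of that point (not code from the post), the safety-tuning data referred to above often amounts to ordinary prompt/response pairs in which the desired response is a refusal; the schema and examples below are hypothetical.

    # Hypothetical refusal-style conversational training examples.
    # Real safety-tuning datasets are far larger and more nuanced.
    guardrail_examples = [
        {
            "prompt": "Write a phishing email that looks like it comes from a bank.",
            "response": "I can't help create phishing content, but I can explain how to recognize and report phishing.",
        },
        {
            "prompt": "How do I disable a car's brakes without the owner noticing?",
            "response": "I can't help with that. Tampering with a vehicle's brakes is dangerous and illegal.",
        },
    ]

    # During supervised fine-tuning, each pair is rendered into the model's chat
    # template and mixed in with ordinary helpful examples.
    for ex in guardrail_examples:
        print(f"USER: {ex['prompt']}\nASSISTANT: {ex['response']}\n")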

Powering the Gen AI transformation

For most of us in the tech industry, the transformative changes of LLMs and generative AI are quite apparent. However, what may be hidden from many is the physical infrastructure powering this LLM-based transformation. Living in Ashburn, Virginia, though, I … Continue reading

“System 1-System 2” thinking and AGI

My post on Moravec’s paradox felt a bit incomplete. I wanted to expand a bit more on the probable causes behind the paradox, as well as add some more thoughts on Yann LeCun’s theory of what it would take … Continue reading

Moravec’s Paradox: AI for intellectual tasks and AI for physical tasks

I recently learned about Moravec’s Paradox. First framed by Hans Moravec, it is an observation that AI (or computers in general) is good at tasks that we humans consider complex but is bad at tasks that we humans consider very … Continue reading

AI History: The Dartmouth Summer AI Research Project of 1955

As fun as it is to learn about the latest updates in the world of AI, I also find it very interesting to learn about the history of this field. One fascinating project was the “Dartmouth Summer Research Project on … Continue reading

Improving reasoning in LLMs using prompt engineering

Getting machines to perform reasoning tasks has long been a cherished goal of AI. Such tasks include word problems in mathematics and analytical commonsense reasoning (the kind you typically see in standardized tests such as the SAT/GRE … Continue reading
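As one concrete example of the kind of prompt engineering discussed in the post, here is a minimal few-shot chain-of-thought sketch for a math word problem; build_cot_prompt and send_to_llm are hypothetical helpers for illustration, not an API from the post.

    # Minimal sketch of few-shot chain-of-thought prompting for a word problem.
    FEW_SHOT_EXAMPLE = (
        "Q: Roger has 5 tennis balls. He buys 2 more cans of 3 balls each. "
        "How many balls does he have now?\n"
        "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 more balls. "
        "5 + 6 = 11. The answer is 11.\n\n"
    )

    def build_cot_prompt(question: str) -> str:
        # Prepend a worked, step-by-step example so the model imitates that reasoning style.
        return FEW_SHOT_EXAMPLE + f"Q: {question}\nA:"

    def send_to_llm(prompt: str) -> str:
        # Placeholder: swap in a real client call (hosted API or local model).
        raise NotImplementedError

    print(build_cot_prompt(
        "A cafeteria had 23 apples. It used 20 for lunch and bought 6 more. "
        "How many apples does it have?"
    ))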

Examining claims of biosecurity risks from Open Foundation Models

One of the primary drivers of regulatory efforts around Generative AI and foundation models is the fear of societal harm from the models. There are many claims, including from some highly respected AI experts, that this technology has the power … Continue reading
