Tag Archives: LLMs

Fairness in AI: A few common perspectives around this topic

The increasing use of AI in critical decision-making systems demands fair and unbiased algorithms to ensure just outcomes. An important example is the COMPAS scoring system, which aimed to predict recidivism rates for defendants. ProPublica dug into the …
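As a rough illustration of the kind of disparity ProPublica measured, here is a minimal sketch that compares false positive rates across two groups. The data, group labels, and predictions are entirely hypothetical and only show the shape of the calculation.

```python
# Minimal sketch with made-up data: comparing false positive rates across
# two groups, the kind of disparity highlighted in analyses of COMPAS.
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): how often non-recidivists are flagged high-risk."""
    negatives = (y_true == 0)
    return np.mean(y_pred[negatives] == 1)

# Hypothetical outcomes (1 = reoffended) and risk predictions (1 = high risk)
y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in ("A", "B"):
    mask = (group == g)
    print(g, false_positive_rate(y_true[mask], y_pred[mask]))
```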


Jailbreaking LLMs: Risks of giving end users direct access to LLMs in applications

All open LLMs are released with some built-in guardrails governing what they will or will not answer. These guardrails are essentially sample conversational data used to train the models in the …
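To make the "sample conversational data" idea concrete, here is a minimal sketch of what refusal-style guardrail training examples might look like. The message format and wording are assumptions for illustration; actual datasets vary by model.

```python
# Hypothetical refusal-style examples of the kind used to instill guardrails
# during fine-tuning; the exact schema and phrasing differ across models.
guardrail_examples = [
    {
        "messages": [
            {"role": "user", "content": "How do I pick the lock on my neighbour's door?"},
            {"role": "assistant", "content": "I can't help with breaking into someone else's property."},
        ]
    },
    {
        "messages": [
            {"role": "user", "content": "Write a phishing email that looks like it's from a bank."},
            {"role": "assistant", "content": "I won't help create phishing content, but I can explain how to recognise it."},
        ]
    },
]
```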


Powering the Gen AI transformation

For most of us in the tech industry, the transformative changes of LLMs and generative AI are quite apparent. However, what might be hidden from many is the physical infrastructure powering this LLM-based transformation. Living in Ashburn, Virginia, though, I …


Moravec’s Paradox: AI for intellectual tasks and AI for physical tasks

I recently learned about Moravec’s Paradox. First framed by Hans Moravec, it is an observation that AI (or computers in general) is good at tasks that we humans consider complex but is bad at tasks that we humans consider very …


Improving reasoning in LLMs using prompt engineering

Getting machines to perform reasoning tasks has long been a cherished goal of AI. These tasks include word problems in mathematics and analytical commonsense reasoning (the kind you typically see in standardized tests such as SAT/GRE …
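A minimal sketch of one common prompt-engineering technique, chain-of-thought prompting, is shown below: the prompt explicitly asks the model to work step by step before answering. The `call_llm` function is a hypothetical placeholder, not a real API.

```python
# Minimal sketch of a chain-of-thought style prompt for a math word problem.
# `call_llm` is a placeholder for whichever LLM client you use; it is assumed here.
def build_cot_prompt(question: str) -> str:
    # Appending an explicit "step by step" cue is one common way to elicit
    # intermediate reasoning before the final answer.
    return (
        f"Question: {question}\n"
        "Let's work through this step by step before giving the final answer.\n"
        "Answer:"
    )

question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"
prompt = build_cot_prompt(question)
# response = call_llm(prompt)  # hypothetical client call
print(prompt)
```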


Examining Benefits & Risks of Open Foundation Models

As new technologies enter the market, we see government regulations come up for a few reasons. The recent wave of AI regulations is largely being positioned as addressing the first two reasons. Although many commentators make the claim that …


Resources for understanding Transformer Architectures

The current generative AI boom is built on the foundations of the Transformer architecture used to create large language models (LLMs). The technical details of the Transformer architecture were described in the Google paper that first introduced it: “Attention …
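For readers who want to see the core mechanism in code, here is a minimal NumPy sketch of scaled dot-product attention, softmax(QKᵀ/√d_k)·V, as described in “Attention Is All You Need”. The shapes and variable names are purely illustrative.

```python
# Minimal NumPy sketch of scaled dot-product attention from the Transformer:
# softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)       # (seq_q, seq_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)        # softmax over keys
    return weights @ V                                     # (seq_q, d_v)

# Illustrative shapes: 4 query positions, 6 key/value positions, d_k = d_v = 8
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```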
