A few weeks back, OpenAI released their chain of thought models, codenamed Strawberry. And so I released Strawberry Grok, allowing you, my faithful minions, to enjoy chain of thought functionality and the blinding speed of Grok. And there was much rejoicing.
Yay! The response from OpenAI was, "Oh yeah? Well, we're giving our models access to real-time data."
Here's my response to their response. Strawberry Grok 2.0 will blow you away. Strawberry Grok 2.0, try it today.
Strawberry Grok 2.0. You're a mistake! If you're one of our regular viewers...
You probably eat plenty of fiber. And you know that we added an autonomous agent feature to Pocket Grok last week that allows it to look up answers on the worldwide interweb thingy when the LLM doesn't immediately have the answer. Well, now Strawberry Grok can do that too. Better yet, since Strawberry Grok thinks its answers over using chain-of-thought introspection, the results are pretty impressive. Let's take a look.
I'll start by asking Strawberry Grok to tell me what we've been talking about. This is just to demonstrate that I haven't pre-cached any answers into the chatbot. We've got a clean session.
Now I'll ask it what the forecast is for Sunday's Green Bay Packers football game. And as you might have guessed, it has no frickin' idea. LLMs in general don't really know any up-to-the-minute information. And up until now, even Strawberry Grok's chain-of-thought feature wouldn't have been any help.
But with the addition of our autonomous agent feature, imported from the Pocket Grok library, our Strawberry Grok chatbot is now empowered not only to look things up, but to apply chain-of-thought consideration to its answers. It will even seek out a secondary information source to back up its first conclusion. Different LLMs have varying degrees of success searching the web. Since Mixtral didn't have any luck, I'm changing over to the Llama 3.2 model and giving it another shot.
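For those of you who like to peek under the hood, here's a rough Python sketch of the flow I just described: ask the model, and if it comes up empty, do a web lookup and have it reason over the results step by step. To be clear, the names below are placeholders I made up for illustration; they are not Pocket Grok's actual API, and the web_search stub is something you'd wire up to your own search backend.

```python
# Illustrative sketch only: these names are placeholders, not Pocket Grok's API.

def web_search(query: str) -> list[str]:
    """Stub: replace with a call to whatever search backend you prefer."""
    return ["<snippet from source #1>", "<snippet from source #2>"]

def answer_with_lookup(ask, question: str) -> str:
    """`ask` is any callable that sends a prompt to your LLM and returns its text."""
    draft = ask(question)

    # If the model admits it doesn't know, gather outside evidence and try again.
    if "don't know" in draft.lower() or "not sure" in draft.lower():
        snippets = "\n".join(web_search(question))
        draft = ask(
            f"Question: {question}\n"
            f"Here is what a web search turned up:\n{snippets}\n"
            "Think through this step by step, then give your final answer."
        )
    return draft
```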
If the heavy token use is a concern, you can configure Pocket Grok to use a local provider. What you'll save in tokens, you'll lose in time, but that's the trade-off. I'm convinced that unlimited token use will become as normal as unlimited talk and text minutes on your cell phone, but until then, running against local models served by Ollama or LM Studio might make more sense for some of you.
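As a concrete example of what I mean by a local provider: Ollama exposes an OpenAI-compatible endpoint on your own machine, and LM Studio does the same on its default port, so switching is mostly a matter of changing the base URL. This snippet assumes you've already pulled the llama3.2 model in Ollama; it isn't lifted from Pocket Grok's code, it's just the general pattern.

```python
# Minimal sketch: talking to a local model through Ollama's OpenAI-compatible endpoint.
# Assumes Ollama is running and `ollama pull llama3.2` has already been done.
# LM Studio works the same way, typically at http://localhost:1234/v1.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's default local endpoint
    api_key="ollama",                      # any non-empty string; it isn't checked locally
)

response = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "What's the weather forecast for Green Bay on Sunday?"}],
)
print(response.choices[0].message.content)
```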
In the background, we're passing a Boolean that forces Strawberry Grok to double-check its answers by finding two different sources that agree on the result. It's confident about the weather report for Green Bay, but unsure where the game will be played. So we'll ask Strawberry Grok to look it up. This is a trickier question than it might seem.
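If you're wondering how a flag like that might work, here's the general idea in a few lines of Python. Again, this is my own illustrative sketch, not the actual Pocket Grok implementation, and `ask` stands in for whatever LLM call you're using.

```python
# Illustrative sketch of "don't trust one source": require two independent answers to agree.

def verified_answer(ask, question: str, sources: list[str], require_agreement: bool = True):
    """Query each source separately; return an answer once two sources agree (or None)."""
    answers: list[str] = []
    for source_text in sources:
        answer = ask(
            f"Using only the source below, answer briefly.\n\n"
            f"Source:\n{source_text}\n\nQuestion: {question}"
        ).strip().lower()

        if not require_agreement:
            return answer  # single-source mode: first answer wins

        answers.append(answer)
        if answers.count(answer) >= 2:
            return answer  # two independent sources agreed

    return None  # stay skeptical: no two sources ever agreed
```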
We haven't told our LLM what today's date is, so asking it about tomorrow is ambiguous. Not only that, but there are a lot of older articles online that contain the phrase "Packers' next game." After all, every football game was the next game at some point in time.
As various website information is being read and analyzed, you might notice that Strawberry Grok has passed up some text that appears to have the right answer. In the interest of securing reliable results, I've tuned the logic to lean toward being dubious and skeptical. If you'd prefer a less cynical robot to do your bidding, just tweak the prompts found in our open-source Pocket Grok to your liking. Once it has figured out the answer, it explains its logic to us.
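To give you a feel for the kind of prompt tweak I'm talking about, here are two made-up system prompts: one dubious, one easygoing. The real prompts live in the Pocket Grok source, and they won't read exactly like these.

```python
# Hypothetical prompts for illustration; swap in whichever temperament you prefer.

SKEPTICAL_PROMPT = (
    "You are verifying facts. Treat every web snippet as unreliable until a "
    "second, independent source confirms it. If the evidence is thin, say so."
)

EASYGOING_PROMPT = (
    "You are summarizing facts. A single clear, relevant web snippet is "
    "sufficient evidence for an answer."
)
```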
Strawberry Grok is now confident of where the game will be played, and what the weather forecast is for Lambeau Field. More importantly, it retains all the information it has accumulated during this session. Strawberry Grok has learned.
It has gotten smarter. And we can demonstrate this by now disabling both the chain-of-thought feature and the autonomous agent that performs all the website lookups, verifications, and validations. For our live demo site on Streamlit, the information doesn't persist. But for your local setup, there's no reason your autonomous AI can't both learn and remember as much information as your hardware allows.
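If you do run it locally and want the remembering part to survive a restart, a plain JSON file on disk gets you most of the way there. This is just a generic pattern, not Pocket Grok's own persistence code.

```python
# Minimal sketch of persisting what the chatbot has learned between sessions.
import json
from pathlib import Path

MEMORY_FILE = Path("strawberry_grok_memory.json")

def load_memory() -> list[dict]:
    """Reload everything learned in previous sessions, if any."""
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def save_memory(memory: list[dict]) -> None:
    """Write the accumulated facts/conversation back to disk."""
    MEMORY_FILE.write_text(json.dumps(memory, indent=2))

memory = load_memory()
memory.append({"role": "assistant", "content": "Sunday's Packers game is at Lambeau Field."})
save_memory(memory)
```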
"So now AI powered by this Pocket Grok doohickey is smart enough to learn on its own and remember everything it learns?" "Yeah, dude, you and I talked about this before we started recording this video." "I don't remember that." AI Tips with Jay!
Hooray! A-I-A-I-A-I-A-I, AI Tips with Jay! AI Tips with Jay is a copyrighted production of jay.gravel.us. All rights reserved by AI Tips with Jay.