I Played with Clawdbot All Weekend. It's Insane
Speaker: Unknown speaker
This is Clawdbot. The ultimate personal AI assistant.
That is open source, runs locally, and can basically do everything. It is what Siri should have been. I'm going to break down everything you need to know about it. I'm going to show you how to use it, and by the end of this video, you are also going to be beyond impressed with Clawdbot.
It actually helped me research and prepare for this video. I connected it to my Asana, I connected it to Grok.
So it has all current information, current tweets, current X posts that I wanted to include, and check this out.
So I said, put together a video outline. And here it is. It even said, okay, you're using Obsidian, here's your hourly cron job, you're using LM Studio with GLM 4.7. So it knew everything about itself, and it put it all in here.
It pulled specific tweets, told me how many views they have, and it added it all programmatically to this Asana card. It can run on your machine. You can then connect it to any of the chat services that you use, like WhatsApp, Telegram, Slack, and chat with it directly from those channels. It is so cool to see.
It is essentially Claude Code plus Claude Cowork, but wrapped up with so much more capability, and then directly accessible from wherever you are, even if you're not home. And so there are a number of ways in which this is better than other AI assistants that I've seen in the market.
One, it is fully open source, and you run it on your own machine. You can connect models from Gemini, from OpenAI, you can even run models locally and use those, and you can mix and match as you wish.
So if you have really complex tasks that you need accomplished, go with Claude Opus 4.5, but if you're running a cron job, for example, you could just run it locally and have it going, and I'm doing that with LM Studio. It also has persistent memory.
So as you're using it, it learns about you, it learns what you like, it learns what you don't like, it learns different tasks that you do often, and that leads me to the third point, which is it is very proactive. You could tell it to do things like check my email and anytime you think I got a very urgent important email, go ahead and message me, give me a summary of it, and in fact, draft a reply. And of course, all of this requires you giving it access to a lot of different services that you use, and yes, you're giving your credentials to a non-deterministic system.
So there are some risks there. I'll talk about that a little bit later. And then last and probably most important is it has full computer access. You can limit it, you can put some guard rails on it, but you are essentially giving Clawdbot access to your computer, but that also means that it could write code, it can execute code, it can iterate on that code.
It is essentially what Cursor or Claude Code or Codex is doing, locally.
This is what that looks like. You get some boundaries, we have a vibe, and you can edit it to be anything you want.
So if you have a certain personality in mind for your Clawdbot, you could just tell it, and it will behave like that. It can be more proactive, it can ask more questions up front, it can verify things before trying things, fully customizable, and again, because it's open source, and it already has a thriving community.
There are basically daily updates for Clawdbot, and it seems everybody on X is talking about it right now. And also buying Mac Minis. I'll explain that in a moment.
So Clawdbot is a 24/7 assistant with access to its own computer. What if there were 10 or a hundred or a thousand, all running 24/7 in the cloud, with access to your files, your Gmail, calendar, everything about you?
That's the future, and we're living it today.
Now, I'm going to break down exactly what access I gave it, and so you can kind of get a sense of how I'm using it.
But why is everybody buying Mac Minis and putting Clawdbot on it? Well, they basically want an isolated environment in which to install Clawdbot and to just go crazy, give it access to the entire system and not really worry about, oh, is it going to also have access to things that I don't want it to.
Now, I took a different approach. I just installed it on my main computer because my thinking is, if I'm giving it access to all of my credentials anyways, Gmail, Slack, Telegram, then it is going to have that even if I put it on its own machine.
But there's something cool also. And so here are just a few examples of the apps that it works with.
So here's WhatsApp, Telegram, which is what I'm using, Discord, Slack, Signal, iMessage, which is really cool, any of the frontier models that I mentioned, you could also use it with local models, I'll show you how to do that.
In fact, it kind of set it up itself. It's kind of wild to see this stuff. Spotify, Obsidian, you can give it access to Twitter, you can also give it Chrome access.
So you install a little Chrome extension and it can browse Chrome on your behalf. Remember, again, because it's installed locally.
So it's basically operating your computer on your behalf. It already has 50 native integrations, and what's really cool is, as I mentioned, there's a community, and they're already releasing skills that people have already tested and proven work really well. And so one of my favorite things that I gave it was access to the Grok API. And thus, as it's doing research for me, it has access to X, to Twitter and everything going on there.
So it really is incredible at doing real-time research for me.
Here is my Clawdbot running in Telegram, and remember, I can run this even if I'm not home, and I can just say hi, and it starts typing back. And so it's actually reminding me about a previous task that I had going with it.
So let me tell you about this task and see if we can get it finalized.
So this is an example of something really useful that it can help you with. I have a bunch of videos locally, basically all the videos I've ever made for my YouTube channel, I save them on a hard drive, I zip it up, and now what I'm trying to do is upload them to Google Drive. And I tried to do that, but there are a lot of videos, as you know. And so halfway through, the upload broke.
So some files have been uploaded, others have not, and I didn't really see an easy way to do that comparison.
So I asked my Clawdbot to do it, and I said, run a comparison, here's the folder in drive, here's the folder locally, let me know which videos have been uploaded and which have not. And it ran a bunch of comparisons. And it figured out that there are 212 missing.
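The video doesn't show how Clawdbot actually ran the comparison, but the core of the task reduces to a set difference between the local filenames and the filenames already in the Drive folder. Here's a minimal sketch; the file lists are hypothetical, and in a real run they would come from something like `os.listdir()` locally and the Drive API's `files.list` call for the target folder.

```python
def missing_uploads(local_names, drive_names):
    """Return the local files not yet present in Drive, sorted for stable output."""
    return sorted(set(local_names) - set(drive_names))

# Hypothetical example lists, standing in for the real folder contents.
local = ["intro.zip", "claude-4-overview.zip", "agents-deep-dive.zip"]
drive = ["intro.zip"]

print(missing_uploads(local, drive))  # → ['agents-deep-dive.zip', 'claude-4-overview.zip']
```

Saving that list to disk, as Clawdbot did, is then just writing the result out one name per line.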
So now I'm testing it, and I'm saying, okay, let's try to upload one again. Maybe Claude can do that directly for me.
So we ran into some Google API errors because we got rate limited, but now I just said hi again about 15 minutes later, and it says, rate limit should be reset by now. Do you want me to try the upload again? Sure.
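Waiting out a rate limit and retrying is a standard pattern; whether Clawdbot does exactly this isn't shown, but the usual approach is exponential backoff, a sketch of which looks like this (the `RuntimeError` here is just a stand-in for whatever rate-limit exception the real Drive client raises):

```python
import time

def upload_with_backoff(upload_fn, max_retries=5, base_delay=1.0):
    """Retry upload_fn on rate-limit errors, doubling the wait each attempt."""
    for attempt in range(max_retries):
        try:
            return upload_fn()
        except RuntimeError:  # stand-in for a 429 / rate-limit error
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

# Simulated flaky upload: fails twice with a rate limit, then succeeds.
attempts = {"n": 0}
def fake_upload():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 rate limited")
    return "uploaded"

print(upload_with_backoff(fake_upload, base_delay=0.01))  # → uploaded
```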
So now the rate limit for Google Drive has been removed, and it is working flawlessly. It ran a comparison of all the files that I've already uploaded to Google Drive against all the files that I have locally, made a list, saved it to my desktop. And now I'm just saying, go ahead, upload it for me, then update the list. Look at this, the .rar file says 211 files remaining.
Okay, do it again, choose a larger file.
There we go. Claude's working.
So I had it start to tell me which model it's using, so that's why it says Opus right there. Uploading Claude 4 overview.zip, 1.0GB. Complete.
Uploaded: Claude 4 overview.zip, 1.0GB. Drive link. Remaining: 210 files.
Want me to keep going? I can batch upload multiple files or continue one at a time. Okay, do one more.
There we go. And if I check back here, there it is. Just makes everything so easy. Well, let me show you what else we can do.
You can set up different cron jobs.
So, for example, every 10 minutes, go check my email, look for any urgent emails, summarize them, draft a reply, and then let me know.
Here I say, set up a cron job for every 5 minutes to check my email for any urgent emails that need response, summarize them, and then draft a reply and show me here.
So I hit enter and it starts working. It creates that cron job, it creates the code to go get those urgent emails. It does the determination itself of what counts as an urgent email versus not, and I actually tested it. I said, okay, tell me how you define urgent versus not, and it went out, it grabbed a bunch of example emails and it said, this is urgent, this is middle priority, and this is low priority, and it was extremely accurate.
Done, created cron job. Schedule: every 5 minutes. Found one unread email.
So this is interesting, it actually didn't find an urgent email and it corrected itself and it said, this is not urgent, standard cold outreach, no draft needed unless you want me to write a polite decline or interest response. I'll update the cron job to better filter out pitches/spam. Want me to adjust what counts as urgent? And I can simply say remove this cron job.
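In Clawdbot the urgent-versus-not call is made by the model itself, but the schedule is just a standard `*/5 * * * *` crontab entry, and the triage idea can be illustrated with a deterministic sketch. The keyword lists here are made up for illustration; they loosely mirror the priorities the agent described (bills and family stuff high, pitches low):

```python
# Hypothetical keyword heuristics; a real agent would use an LLM for this call.
URGENT_HINTS = ("payment", "invoice due", "school", "asap", "deadline")
SPAM_HINTS = ("sponsorship opportunity", "quick question", "partnership")

def triage(subject):
    """Rough three-way priority call on an email subject line."""
    s = subject.lower()
    if any(h in s for h in SPAM_HINTS):
        return "low"
    if any(h in s for h in URGENT_HINTS):
        return "high"
    return "medium"

print(triage("Invoice due tomorrow - payment required"))   # → high
print(triage("Sponsorship opportunity for your channel"))  # → low
```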
And now it's not going to run anymore. And so you can start to get very creative with the tasks that you give it, with the combination of memory and cron jobs and access to your computer, the ability to write and execute code locally. All of this stuff is really cool.
One other thing I mentioned is you can run local models.
By the way, if you want to run this locally, you can also run it on the sponsor of today's video. A special thank you to Dell Technologies for sponsoring this portion of the video. Dell's Pro Max family of PCs is incredibly powerful for AI workloads, using the new Grace Blackwell series of Nvidia GPUs, including GB300 and GB10.
These are absolute monster GPUs in your desktop. Learn more about the Dell Pro Max GB10 and GB300 and the Dell Pro Max lineup of workstations with Nvidia RTX Pro GPUs. Click the link in the description below, let them know I sent you, check it out.
So I'm running LM Studio, and I downloaded GLM 4.7, which is a mixture-of-experts model with built-in thinking, which I can't seem to disable. And I actually had Clawdbot tell me, hey, is this the best model to use, since I want really fast responses and I want to use it for easier, less sophisticated tasks. And it said, well, it takes about 4 to 6 seconds on average to get a response. I think we can get a better model.
And literally through Telegram, I had it tell LM Studio to download whatever model it wanted, and it chose Qwen 3, a mixture of experts without thinking.
So it's very fast. It downloaded this, remember, completely remotely. I was out, I was at a restaurant and I was telling it to do this. It was really mind blowing.
So LM Studio downloaded it and now we have access to it.
So just to show you it's working: run this prompt on Qwen 3 in LM Studio, tell me a short story, then reply back with its response.
Now, I shouldn't have to explicitly tell it to run Qwen 3 on LM Studio, and in fact, you can actually set up a daisy chain. In my daisy chain of large language models, I'm using Opus 4.5, Haiku, and Qwen 3 locally.
So here it is, the Qwen 3 response. And if I open up LM Studio, we can see the developer logs; this is it.
So we know it actually used the local model. And I can explicitly tell it, use the local model for cron jobs, for easy tasks, and it will remember to do that.
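Under the hood, LM Studio's local server speaks the OpenAI-compatible chat API, which is how a tool like Clawdbot can hit it. Here's a sketch that builds the request payload; the model name is a hypothetical LM Studio identifier, and the actual network call is left commented out so the snippet runs without a server:

```python
import json
import urllib.request

LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"  # LM Studio's default local endpoint

def build_request(model, prompt):
    """Build an OpenAI-style chat payload for LM Studio's local server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }

payload = build_request("qwen3-moe", "Tell me a short story.")  # model name is illustrative
print(json.dumps(payload)[:50])

# Uncomment to actually call a running LM Studio server:
# req = urllib.request.Request(LMSTUDIO_URL, data=json.dumps(payload).encode(),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read().decode())
```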
Now, it's not perfect, I will say that. Sometimes it just thinks, okay, I think I'm just going to use Haiku here, or I'm going to use Opus 4.5. And I think with subsequent versions, subsequent updates of Clawdbot, it's going to be better at knowing when to use which model, and if I tell it to use a specific model, it's going to be much better at actually listening to what I'm telling it.
But for now, it's pretty good. Best utility I've found so far? Email plus calendar awareness: boring but immediately useful. It checks my three inboxes and tells me what actually needs attention. Drafted and sent with Clawd.
That is based on what it is actually doing for me.
So I asked it to show me its memory file.
So let's take a look at what it knows about me so far.
So, Matt's preferences. Writing: always use the humanizer skill when drafting emails, articles, messages, or any writing for Matt. Announce item before using: I'll run this through humanizer. Goal: make sure anything I write sounds human, not AI-generated.
Communication: Matt uses Superhuman for email, auto-filtering. Slack messages as Matt must be prefixed with "from Clawd". Slack messages as the bot don't need a prefix.
Schedule. Early bird, usually up around 7:00 a.m. PST. Has a show called Forward Future Live.
Wife's calendar invites: some are things Matt needs to know about, don't ignore them all. Recurring stuff like sauna: just remind once the day before, not day of. Use judgment; if it looks like a joint event or something he'd attend, remind him. What's important for email calendar alerts: high priority, family school stuff, bills and payment reminders; medium, sponsor pitches, guest bookings, shipping notifications; low, newsletters, LinkedIn, marketing.
Tools and search strategy: Grok for X/Twitter. Use Grok's X search via the xAI API when searching X/Twitter; use regular web search for everything else. Bird skill (X/Twitter CLI): read only. Never post, reply, like, retweet, or take any write action on Matt's behalf. Read operations only.
Last updated 2026-01-24. You can even say, show me your soul. Again, the cool thing is you can basically modify it all you like directly from Telegram.
So it's like self-changing, self-evolving. It's kind of wild to think about. A couple other things, it uses sub agents, so you can have a conversation with it, kick off a sub agent, it's going to go do its thing in parallel and then come back when it's done and you can continue talking. It does not lock up synchronously.
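The transcript doesn't show how the sub-agents are implemented, but the behavior described, kicking off work that runs in parallel and reports back while the main conversation stays responsive, can be sketched with a background thread and a result queue (the task function here is a hypothetical stand-in):

```python
import threading
import queue

def spawn_subagent(task_fn, results):
    """Run a task on a background thread so the main chat loop stays responsive."""
    t = threading.Thread(target=lambda: results.put(task_fn()))
    t.start()
    return t

results = queue.Queue()
worker = spawn_subagent(lambda: "research complete", results)  # hypothetical task
# ...the main conversation continues here while the sub-agent works...
worker.join()
print(results.get())  # → research complete
```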
You can also queue up multiple tasks.
So if I type tell me the date, and then I say tell me the date again, I can queue those up and it will do the first one and then go to the next.
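Queued tasks, unlike sub-agents, run strictly in order: each one finishes before the next starts. A minimal sketch of that behavior, with made-up task functions standing in for real requests:

```python
from collections import deque

def run_queued(tasks):
    """Drain a FIFO of (name, fn) tasks, one at a time, in arrival order."""
    pending = deque(tasks)
    results = []
    while pending:
        name, fn = pending.popleft()
        results.append((name, fn()))  # each task completes before the next begins
    return results

out = run_queued([
    ("date", lambda: "2026-01-24"),        # illustrative task
    ("date again", lambda: "2026-01-24"),  # illustrative task
])
print(out)
```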
So there you go.
So you can queue up as many tasks as you want. Obviously, this is quick and easy, but if you have a more complex task, you can set it off and do your next thing. It's not all perfect. Let me break down some of the issues.
One, certainly there is a security risk. You're dealing with non-deterministic systems and you're giving it access to really important things like Gmail and calendar and drive and whatever other services you want to give it access to, you're basically giving it to an AI to act on your behalf. Sometimes it's going to make mistakes, sometimes it's going to make an irreversible change that is painful for you.
So this is really right now for more power users. You really have to understand the consequences of what you're asking it. And it's really important to think about it as you're prompting it.
So for example, for that Google Drive task, I said, okay, tell me exactly what you're going to do before you do it and let's test it with one file before trying to upload all of them. It also has rough edges.
It is far from perfect. It is a project that is only two months old. I believe it's a solo developer building it, but there is a growing community.
But that means there are problems. And just like any artificial intelligence system, there are things where you're looking at it and you're like, I'm pretty sure you should be able to do this, or how did you make that mistake? How did you forget about that? And speaking of forgetting, memory compaction is still an issue, and it's not specific to Clawdbot but to any AI system: at a certain point, all of the memory you've given it hits the context window limit and it needs to start compressing that memory.
When you compress the memory, it loses detail.
So it might forget something that you told it in the past, and you're like, oh my god, why do I have to keep telling you that? And the way to solve that is to continue to help it memorize those things.
So just tell it, no, explicitly write this down.
Crashes also happen. I was out of my house yesterday, and all of a sudden it got into this weird tool call loop and was basically broken. I could not use it anymore. Any message I would send, it would tell me that the tool call was malformed, and I had to wait until I got home and restarted the system, and then it finally worked again.
And next, it's not free if you're using Claude, obviously. If you're using different frontier LLMs and you're giving Clawdbot your API key, it's using tokens and it costs money.
So for example, just yesterday, I used 70 million tokens, the vast majority of which it chose to spend on Claude Opus 4.5. You can see some Haiku 4.5 right there.
Then today, just so far and it's only 9:30 in the morning, 25 million tokens total.
Now, for costs, that is a lot more than I thought it would be. Holy crap. It is very expensive.
In fact, I'm just seeing it now. I'm kind of surprised.
So yesterday, I paid about $130. Today, we're already up to $32.
So immediately, I'm thinking, how do I start using other models that aren't so expensive?
So that's where local models start coming into play.
So you really need to be careful; obviously, I need to be careful about costs. Very cool project. I think this is something special. I do recommend trying it out, just be careful; if you're really worried, install it on a VPS, but definitely give it a try and get a feel for what is likely to be the future of AI assistants.
Now, I mentioned this is what Siri should have been. And the one piece missing is some hardware device that I can actually speak to and have it talk back to me. Typing everything out is fine, but I really want to be able to have a voice assistant powered by Clawdbot.
So it would be cool if it worked via my phone, I'm sure there's going to be a skill released soon that allows for that.
But for now, it's all through Telegram or whatever chat app you're using.
Now, it does have TTS support.
So you can use your voice, but it all goes through whatever chat app you're using. Go try it out, let me know what you think, if you enjoyed this video, please consider giving a like and subscribe and I'll see you in the next one.