Today I had a little aha moment. If anyone had asked me yesterday about AI tools integrated into their editor, I would have said it's a bad idea. Ask me today, and I would still say it's a bad idea. :D I don't want to rely on AI tools and get too comfortable with them, especially if they come from big companies and communicate over the internet. That's a no-go for me.
But for weeks now I have been playing around with offline AI tools and models I can download and run locally on my low-end gaming PC, mostly asking silly questions and such. It's not integrated into any other software, only into the dedicated application: GPT4All (no, it has nothing to do with ChatGPT).
I'm working on a small GUI application in Rust and am still figuring things out. I'm not good at it, and there was a point where I had to convert a function into an async variant. After researching, trying things, and reading documentation, I could not solve it. Then I asked the AI. While the output did not work out of the box, it helped me find the right puzzle pieces. To be honest, I don't understand everything yet, and I know that's bad. It would be really bad if this were work for a company, but it's a learning project.
Anyone else who doesn't like AI, but takes help from it? I am still absolutely against integrated AI tools that also require an online connection to company servers.

Edit: Here's the before and after. (BTW, the code block on Beehaw is broken, as the less-than and ampersand characters are automatically translated into their HTML entities, `&lt;` and `&amp;` respectively.)
From:

```rust
pub fn collect(&self, max_depth: u8, ext: Option<&str>) -> Files {
    let mut files = Files::new(&self.dir);
    for entry in WalkDir::new(&self.dir).max_depth(max_depth.into()) {
        let Ok(entry) = entry else { continue };
        let path = PathBuf::from(entry.path().display().to_string());
        if ext.is_none() || path.extension().unwrap_or_default() == ext.unwrap() {
            files.paths.push(path);
        }
    }
    files.paths.sort_by_key(|a| a.name_as_string());
    files
}
```
To:

```rust
pub async fn collect(&self, max_depth: u8, ext: Option<&str>) -> Result<Files> {
    let mut files = Files::new(&self.dir);
    let walkdir = WalkDir::new(&self.dir);
    let mut walker = match tokio::task::spawn_blocking(move || -> Result<WalkDir> {
        Ok(walkdir)
    })
    .await
    {
        Ok(walker) => walker?,
        Err(_) => return Err(anyhow::anyhow!("Failed to spawn blocking task")),
    };
    while let Some(entry) = walker.next().await {
        match entry {
            Ok(entry) if entry.path().is_file() => {
                let path = PathBuf::from(entry.path().display().to_string());
                if ext.is_none() || path.extension().unwrap_or_default() == ext.unwrap() {
                    files.paths.push(path);
                }
            }
            _ => continue,
        }
    }
    files.paths.sort_by_key(|a| a.name_as_string());
    Ok(files)
}
```
I love using AI to assist with programming.
I often use it for boring stuff like mass refactoring or generating regex or SQL.
I wouldn’t use it to write big chunks of code, more to figure out how to do things.
It's like interactive docs that I can ask follow-up questions to.
“mass refactoring or generating regex or SQL” sounds a lot like “big chunks of code”, though. SQL, and especially regex, is stuff you need to write yourself in order to really understand it.
Regex is very rarely big and SQL is often just to figure out the best way to query something.
Refactoring is usually something like “rewrite these 10 C# classes into records”.
I do understand it. I just don’t want to bother writing it. I can validate the output easily. Works out much quicker overall and often catches edge cases I may not have thought about.
If you’re interested in learning more about SQL, throwing EXPLAIN at your query and the AI’s version may be really interesting.
I’m usually perfectly happy trusting my ORM, but even then it’s really helpful to dig a little deeper to figure things out, both in development and in production.
I know SQL really well but avoid writing it whenever possible.
Getting ai to explain blocks of code is a really good use case for them.
> “mass refactoring or generating regex or SQL” sounds a lot like “big chunks of code” tho. SQL and especially regex is stuff you need to write yourself in order to really understand it.
yeah I wouldn’t use it for doing the whole thing, but have been using it here and there to help me figure out why I’m getting errors and how to solve them. As a newbie programmer it helps guide me back to the right path.
However… even that doesn’t work all the time because it’s still not smart enough to isolate issues coming from imports, plus some other minor hiccups.
It’s just another tool. For me personally, at worst it’s like advanced rubber ducky programming. As long as you have the discipline to not use code that you don’t understand you’ll be fine, but that goes for any resources, LLM or not.
Post the original code to !rust@programming.dev and point to where you got stuck, because that AI output is nonsensical to the point where I'm not sure what excited you about it. A self-contained example would be ideal; otherwise, include the crates you're using (or the `use` statements).

Why is it nonsensical? It works and compiles, and the output from my code and the new async code is the same. So I don't know what the problem here is.
Alright. Explain this snippet and what you think it achieves:

```rust
tokio::task::spawn_blocking(move || -> Result<WalkDir> { Ok(walkdir) })
```
@FizzyOrange@programming.dev I see, and I think I understand what you guys are saying. I guess cheating can only do so much. I will go back to the drawing board, write the function (inside of an impl block) from the ground up with threading in mind, and learn how to do it properly.
BTW, the snippet I pointed to, and the whole match block, is not incoherent. It’s useless.
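For what it's worth, here is a sketch of why it's useless. The quoted closure only moves an already-built `WalkDir` value across the thread boundary; no blocking work happens inside it, so `spawn_blocking` offloads nothing. For the offload to mean anything, the walk itself has to run inside the closure. The sketch below uses `std::thread::spawn` in place of `tokio::task::spawn_blocking` so it stays dependency-free; the shape is the same, and `collect_files` plus the demo directory are made up for illustration:

```rust
use std::path::{Path, PathBuf};
use std::thread;

// Recursively collect file paths, optionally filtered by extension.
// This is the blocking work that belongs *inside* the offloaded closure.
fn collect_files(dir: &Path, ext: Option<&str>) -> Vec<PathBuf> {
    let mut paths = Vec::new();
    let Ok(entries) = std::fs::read_dir(dir) else {
        return paths; // unreadable directory: skip it silently
    };
    for entry in entries.flatten() {
        let Ok(file_type) = entry.file_type() else { continue };
        let path = entry.path();
        if file_type.is_dir() {
            paths.extend(collect_files(&path, ext));
        } else if ext.is_none() || path.extension().and_then(|e| e.to_str()) == ext {
            paths.push(path);
        }
    }
    paths
}

fn main() {
    let dir = std::env::temp_dir().join("spawn_blocking_demo");
    std::fs::create_dir_all(&dir).expect("create demo dir");
    std::fs::write(dir.join("demo.rs"), "fn main() {}").expect("write demo file");
    // The whole walk runs on the spawned thread; the caller only joins the
    // result. With tokio, this closure is what would go into spawn_blocking.
    let handle = thread::spawn(move || collect_files(&dir, Some("rs")));
    let files = handle.join().expect("walker thread panicked");
    assert!(!files.is_empty());
    println!("found {} .rs file(s)", files.len());
}
```

Wrapping `Ok(walkdir)` in the closure, as the AI output did, is the equivalent of spawning a thread that immediately returns a value it was handed, and then doing the real work on the caller's thread anyway.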
It’s definitely not cheating. But you also do need to understand what you’re doing.
He's right. It can give the correct answer but still be crazy incoherent code. Like:

```rust
let sum = a + b + 2 * a + 3 - b + b - a - 3 - a;
```

Do you really want code like that in your project? IMO AI assistants definitely increase productivity, but you also definitely need to actually read and understand the code they output, otherwise you're adding a ton of bugs and bad code.
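For the record, that expression really does reduce: the `a` terms sum to a single `a` (1 + 2 − 1 − 1), the `b` terms to a single `b` (1 − 1 + 1), and the constants cancel (3 − 3), leaving plain `a + b`. A throwaway check (the variable values here are arbitrary):

```rust
fn main() {
    let (a, b) = (7, 11);
    // The convoluted version quoted above...
    let sum = a + b + 2 * a + 3 - b + b - a - 3 - a;
    // ...is algebraically identical to the obvious one.
    assert_eq!(sum, a + b);
    println!("{sum}"); // 18 for a = 7, b = 11
}
```

Correct output, incoherent path to it: exactly the kind of thing you only catch by reading the code.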
I don’t mind AI for coding assistance. Sometimes I am writing a function and it suggests basically what I was going to write anyways, then I just have to hit tab and move to the next section. Sometimes I use it to add comment descriptions to my functions so I don’t have to type it manually. Sometimes I use it to spitball ideas.
I think the trick is to use it as a tool to make yourself better, not to do the work for you.
I use it sporadically. Sometimes it can pinpoint the specific algorithm I need that I didn't know about.
AI is surprisingly helpful with providing a starting point. When you want a helloworld app, an example of how to use some part of a crate, or a code snippet showing how to take what you have and do something unusual with it, AI is super useful.
I would love to be able to get the same quality of AI locally as I do from ChatGPT. If that’s possible, please let me know. I’ve got two 3090s ready to go.
But for now, I’m just enjoying the fact that ChatGPT is free. Once they put up a pay wall, it’s back to suffering (or maybe/probably trying out some open-source models).
I agree. Even when copilot gets something completely wrong it’s usually easier to think “no that’s wrong, it should be this” than “ok the first line of code should be…”.
It completely solves the “blank page” problem.
Gemma 2 27B IT is reasonably good for computer-related things. It's available as a Hugging Face GGUF model, which is compatible with koboldcpp, which means multi-GPU acceleration.
oobabooga is better than GPT4All. The software is better. You load GGUF files using the llama.cpp that is integrated with it.
Why is it better?
It runs more smoothly and with no memory bottlenecks. Besides, you can load any GGUF you want. You are not limited to the LLMs offered by GPT4All.
I have no comparison, so I cannot say much. But what do you mean by memory bottlenecks? Isn't that a limitation of the hardware itself? I mean, if I have a memory bottleneck in GPT4All, then I would have it with other software as well.
The app will freeze the computer if you use models that are too big. It also produces stuttering in the smaller models.
BTW, both applications download from Hugging Face, and I could just drop in any manually downloaded .gguf file too. And it runs through llama.cpp as well, so it shouldn't be too different. GPT4All got a few big updates recently.
But I lack experience and am open to any alternative. Unfortunately, oobabooga is not available through Flatpak (I'm on Linux). If it becomes available as a Flatpak, I'll give it a try. At the moment there is no hurry for me, as it works fine now. But the new Llama 3.1 128k is slow, and I wonder if it's a problem with AMD cards, with GPT4All, or with this model in general. So that's why I'm open to trying other software.
OK. You run start_linux.sh in oobabooga to run it on Linux. I've never run it on Linux myself, though.
Somewhere in GitHub’s docs, they address the difference between an AI AutoPilot and an AI Co-Pilot, and I think it’s the most useful distinction to navigate good vs bad uses of AI, today.
AI Co-Pilots can dramatically accelerate people at all experience levels, and seem to particularly shine for coding problems.
My bullshit meter still goes off whenever someone is selling an AI AutoPilot with the promise that there's no need for any human staff guiding it.
AI AutoPilots for every use case are being developed. They're just going to arrive far later, and (initially) at far lower quality, than the loudest folks keep promising.
I’m shocked and appalled! Isn’t the whole point of using Rust to prove you’re a better developer by using an extra hard language? At least that’s what I like about it.
I'm kidding, of course. Whoever has never copied and pasted code they didn't understand from Stack Overflow can go ahead and complain about using a local LLM.
Ultimately, what makes a good developer includes understanding and being really good with the tools they use, but the biggest impact comes from identifying problems and asking the right questions to solve them.