#### Thoughts
I'm having a truly rough time sorting out how I feel about this one, and as time goes on it only gets more.. persistently unclear? So this is gonna be a very disjointed write-up of thoughts, just to try to iron things out. Not meant to be formal, not meant to be definitive, hell, not even really meant to be my stance or position, as it's literally me trying to figure that out myself.
In general I think I'm far, far, far from alone in being annoyed at how many tech companies have inserted AI assistants into things that were extremely functional beforehand and are now often worse for it. Amazon took away the ability to search reviews and forces you to trust an AI to do it for you. It's absolutely shit. I'm a Kagi user, so I haven't had to deal with Google's search bullshit, but I've seen plenty of examples there that would be comical if they weren't the result of serious searches.
Even in cases where I've been genuinely a little impressed with AI/LLM-based tools, however, I just haven't felt the need to use them. Apple's on-device tools, for example. I don't mind the notification summaries, but I was already barely glancing at notifications anyways. The writing tools perform better than I would've ever expected. I think they're genuinely neat, even. However, outside of poking at them like a toy on occasion, typically right after an update, I find myself never bothering to use them again.
I think the above gets at part of why they've become such a big problem. These things are, at their core, a very neat toy that looks like it could be a very useful tool. But *looks like* and *is* aren't the same thing.
Now don't get me wrong. Machine learning has so many cool applications. This isn't about that. This is all about tech companies desperate to find and deliver the **next big thing**. Or at the very least, do their damnedest to make what they're currently offering into that by brute force.
It feels like the tech emperors are walking around with no clothes on. Or at least, it mostly does? I also know plenty of folks that *are* using these tools and quite happy with them. Enough that I find myself wondering.. am I missing something?
Given how wildly LLMs can go off script, though, either from a poor assumption or an outright hallucination, I have to sanity check the output on anything that's genuinely important. So how much time is it really saving me? And if it wasn't important enough to be worth my time to begin with, won't I save even more time by just ignoring it entirely?
Maybe something can come along and change my mind here, but it hasn't yet, and I don't see much sign of it coming.
Oh, and one last note: I haven't even gotten into the ethics of all of this. If these tools require copyright and other IP protections to go out the window.. while their own content isn't shared completely and openly? Oh, absolutely fuck off. Likewise, generative AI art can just rot in hell. I'd rather pay an artist and get what I want from who I want. I absolutely don't see my mind getting changed here.
#### Updates
Update: teasing out more of what annoys me, I played with things a bit, and that led to [[AI Will Confidently Lie]].
Update to the update, again: [[AI is an Illusion]].
What really concerns me is the sheer confidence in the lies. The damn thing has no problem gaslighting the user. It really is a clever illusion of understanding, but the illusion breaks down when pressed too hard, and the idea of counting on it is pretty much insane to me. Using it as a tool to help sort things out? Maybe, especially if you're going to be diving in closer yourself. But counting on the output in a critical application? Truly courting disaster.