CoPilot Having a Normal One

Sigh.

I mean, even if you can’t recall the ASCII characters for a hex value (like me), you should be able to realize that 0x51 is one less than 0x52, so the “R” and the “3” should be right next to each other in the ASCII table. Whether the “R” should be a “4”, or the “3” should be a “Q”, you can see that this is just plain wrong at first glance. LLMs can’t. I get it, of course. CoPilot interpreted the 0x51 in the second position as decimal instead of hex (as opposed to all the others), which does accurately translate to a “3”.
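
If you want to check it yourself, a two-line sanity check in Python does the trick (these are just the byte values from my example; chr() maps a code point to its character):

```python
# 0x52 is "R"; 0x51 is one less, so it has to be the character right before "R".
print(chr(0x52))  # R
print(chr(0x51))  # Q  <- what the byte actually is
# Reading that same "51" as decimal instead of hex really does give "3".
print(chr(51))    # 3
```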

That’s the thing I find about CoPilot and ChatGPT so far: They have quick answers and suggestions for every line as I’m typing, and half of everything that looks right at first glance turns out to be wrong. I actually started to argue with CoPilot after fruitlessly trying to use it to track down a bug for a half hour. What am I doing with my life?

But sure, tell me how we’re all going to lose our jobs this year because of this technology.

More AI for all the Corporate IT Things

Last month, I was talking about how I didn’t understand what my blue-chip Fortune 250 company is doing with AI. From AI for all the Corporate IT Things:

Well, it’s a good thing I don’t understand, because he’s not talking about using AI to fix IT. He wants to use “technology” to improve our “safety-ness.” Say wha..? Like, he wants to use AI to improve safety on the factory floor. Huh?! Are we going to buy Tesla robots to pull people’s fingers out of the way of presses?! I’m confused.

I sat in a Zoom call where someone discussed the first pilot program for our official corporate AI efforts. On the one hand, they’ve done exactly what they said they were going to do. They’re trying to use AI to reduce OSHA incidents. Surely that’s a noble effort, right? But on the other hand, I have trouble imagining a real-world scenario that would be less applicable to AI. I mean, first of all, safety incidents are already scrutinized under a microscope. Second of all, there are so few of them that I don’t believe you can use AI to analyze them. There’s not enough data to establish patterns. On top of that, every incident is an outlier, and gets dealt with immediately, and not in a performative way, but, like, for real. New rules are put in place, guard rails are installed, etc. So these outliers are very, very unlikely to happen again. Ergo, the data is not statistically significant, and whatever else you know about AI, it’s ALL based on statistics. So I don’t get it.

The other thing that strikes me is that we’re using — er, “renting,” I’m quite certain, and at an exorbitant rate — an off-the-shelf AI product called GenAI by Palantir. You know, the love child of the so-called Five Eyes multinational intelligence conglomerate, and the company that spies on everyone, everywhere, all of the time. So we’re not using our company’s vast resources to invest in creating our own AI models. We’re just paying our contractors to learn how to operate someone else’s machine. In this golden age where instructions on how to create models are readily accessible, and the coding libraries to implement them proliferate, we’re eschewing the opportunity to create custom models that could help with our specific business problems.

Over a year ago, I talked with people about what I think we could do with AI, but I didn’t get anywhere. In the past few months, several other engineers have spoken to me about similar ideas. In the part of the company I inhabit, there is a glaringly obvious use for AI staring us in the face. The problem is that we don’t have all the data we need to make it work, and, from where we sit, it is impossible to get the owners of the systems we would need to tie into our data to open up their databases to us. That sort of thing is simply never going to happen without a strong, direct proclamation from the CEO, and, even then, getting those people to give up some of their “power” in the company so that someone else can have more is going to be fought up and down the org chart. So we seem stuck. The only things we can use AI for won’t matter, and the things that would make a difference will never be done.

AI Apocalypse

The Harris campaign is using a lot of AI image generation to beef up the size of their crowds in pictures of events, and they’re doing a full-blown media psyop to pretend that conservatives are going to vote for her. Two can play that game. I mean, at this point, anyone and everyone can. Nothing is real any more. Nothing. Unless you see it with your own eyes and hear it with your own ears, doubt it. Certainly don’t believe anything you see on the news or social media. I’m seeing stuff EVERY DAY that gets proven to be a complete fabrication within hours. It’s happening ALL THE TIME, and you don’t even know it. However much you THINK is happening, it’s MUCH worse than that. We are on our own. There’s no one coming to save us from this AI apocalypse. Certainly not the government. They’re already using it against us!

AI for all the Corporate IT Things

I got an email with a link to a “town hall” about IT. I said to myself, alright, I dare you to tell me something interesting or actionable, and started watching the replay.

The CIO leads off, of course. His first slide is about DEIC, and celebrating/observing Black History Month and the Lunar New Year.

Sigh.

I mean, that’s great and all, but that’s 10 minutes we’re not talking about IT, which is what this meeting is supposed to be about, and which is all I care to hear about. I seriously doubt that people in, say, Europe or China care much about the US Black History Month, or that people in the US care about the Chinese Lunar New Year, for that matter. But, sure, let’s waste time pandering in the name of the current thing.

And then he says he’s able to relax, now that we know Taylor Swift was going to be at the Super Bowl. He didn’t know which teams were going to play, but he spent a few minutes talking non-ironically about Swift being there.

Again, I mean, that’s great and all, but a half hour in, we’ve now spent thousands of man-hours not talking about IT.

When we finally get around to talking about, you know, information technology, I find out that we’re apparently using AI to modernize our “corporate operating system.” I know a little about AI. I know a lot about how our internal procedures and organizational systems work. I do not understand how we can get AI to fix any part of this.

Well, it’s a good thing I don’t understand, because he’s not talking about using AI to fix IT. He wants to use “technology” to improve our “safety-ness.” Say wha..? Like, he wants to use AI to improve safety on the factory floor. Huh?! Are we going to buy Tesla robots to pull people’s fingers out of the way of presses?! I’m confused.

Next, we’re apparently going to minimize all “risks” to IT uniformly, without specifying or identifying what any of those “risks” are. So, at least we’ve got that going for us, which is nice. We’re going to do this by 1) reducing “new” findings, 2) eliminating repeat “findings,” and 3) closing “findings” faster. Well, that certainly seems simple. A little light on details, but I’m sure we’ll figure it out.

Then we’re going to “partner” with AI, and it’s going to help us be more “exponential.” Except that we’ve also been sent a company-wide email that says we’re not allowed to use AI for, well, anything!

After an hour and a half, I gave up watching. I just want to note that the leader of “transformation” just bought a new-fangled “Mac” and says he’s “challenged” to set it up.

Take a close-up look at Tesla’s self-driving car computer and its two AI brains

Tesla’s in-house chip is 21 times faster than the older Nvidia model Tesla used. And each car’s computer has two for safety.

Source: Take a close-up look at Tesla’s self-driving car computer and its two AI brains

Tesla has invested 14 months in developing their own board for self-driving and safety operations, which will NOT be available to competitors in the space. Just as Apple designs their own processors (and absolutely slays at it), this looks to me like a serious competitive advantage. A lot of people are looking at Tesla as a car company, but they’re looking more and more like Apple every day. How would you characterize what kind of company Apple is?…