Can GPT-4 *Actually* Write Code? – by Tyler Glaiel

I test GPT-4’s code-writing capabilities with some actual real-world problems.

Source: Can GPT-4 *Actually* Write Code? – by Tyler Glaiel

Can these new large language models really replace software engineering? GPT is showing that it can write trivial code with well-defined inputs and outputs, but I work on very complicated applications, and the trouble is specifying the problem we’re trying to solve. I’ve thought a bit about trying this exercise with my own software, that is, telling GPT the general issue we’re trying to address and seeing what it comes up with. The difficulty is that it took me months to understand the depth of what was going on, so it would be very hard to boil it down to a prompt.

I was at Purdue University, studying mechanical engineering, in the late ’80s. An electrical engineering friend had gotten an internship at a Fortune-100 company. I marveled, but he explained that, as a “new guy” at a monstrous company, you would spend your time… oh, I don’t know… designing a very specific screw until you worked your way up the ladder for a decade or two.

From the start of my career, I fell into writing software for fellow engineers in manufacturing companies, and I’ve been a full-stack guy, inventing new things, for about 27 years now. It’s been very intellectually rewarding. Unfortunately, I make a lot less than I could in a coastal city working for a non-manufacturing, “internet”-type company. My total career compensation is likely staggeringly smaller than it could have been.

But when I think about chucking this approach and trying to leverage my experience to get a job at a “software” company, I go back to my buddy’s comment from 30 years ago. What is intellectual satisfaction worth to you? To me, it works out to being worth literally millions of dollars in career earnings, I guess.

As a picture-perfect example of being able to do big, novel ideas in software, I find myself in a unique position to try to make my own model to do, essentially, what a lot of the engineers at my company do. I have the data. I have the freedom to spin up whatever infra I need in the cloud. I have the ability and the time to learn machine learning, which I’ve already started. If it works, some managers will love me, and a lot of engineers will hate me. It’s basically the story of my career, just writ larger this time. Yay.

The Problematic Black Box Nature of Neural Networks and Deep Learning – Brightwork Research & Analysis

Neural networks and deep learning are normally black box systems. This black box nature of neural networks leads to problems that tend to be underemphasized in the rush to promote these systems.

Source: The Problematic Black Box Nature of Neural Networks and Deep Learning – Brightwork Research & Analysis

I find this article absurd. If I were to create a neural network, the very second thing I would program into it would be the capability for it to log WHY it did the thing I programmed it to do. Are you really telling me that the tools available to us right now are incapable of this?
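For what it’s worth, logging every intermediate value a network computes is easy; the sketch below (a hypothetical toy two-layer network in NumPy, not any production system) records its full internal state for each decision. The crux of the black-box debate is whether those raw weights and activations count as a “why.”

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network with fixed random weights.
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 1))

def forward_with_log(x):
    """Run a forward pass and record every intermediate value."""
    log = {"input": x}
    h = np.maximum(0, x @ W1)        # ReLU hidden layer
    log["hidden_activations"] = h    # the complete internal 'trace'
    y = h @ W2
    log["output"] = y
    return y, log

x = rng.normal(size=(1, 4))
y, log = forward_with_log(x)
# Every computed value is captured, but it is a list of numbers,
# not a human-readable reason for the output.
print(sorted(log.keys()))
```

So the logging itself is trivial; interpreting the logged numbers is where the “black box” criticism lives.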