There is no Vibe Engineering
19 points by serce
In such a future, every engineer essentially becomes a mix of an architect and a platform engineer. We could even coin the term “Vibe Engineering” for this new role – but its definition would ultimately be the same as traditional software engineering. So, there is no distinct “Vibe Engineering” – just software engineering, perhaps involving less code typing than before.
It seems to be the case that “Software Engineering” as a title predates a definition of the term. Engineers are typically those who apply mathematics and scientific processes to a problem during the design, implementation, and operation of a system. This is, uh,… rare in “software engineering”.
Trying to make sense of these nonsense terms feels fruitless. I’d say we’ve been “vibe coding” all along, but instead of models it’s some library someone threw onto GitHub that may as well be a black box most of the time.
Since it hasn’t been linked yet, I think a reference to the evergreen Crossover Project is useful here.
According to the only useful demographic for answering this question (those who have been both traditional engineers and software engineers), there are a lot of similarities between the fields, and even some things that are more “engineering-ey” in SWE than traditional engineering (like version control).
This is likewise rare in traditional engineering disciplines. Most everyday engineering math amounts to simple arithmetic. When possible, reference solutions are used; I know this goes for EE, civil, and mechanical engineering. When that isn’t possible, CAD analysis features are heavily employed. Only the small percentage of engineers working on frontier projects, where no established solutions exist, ever leverage serious math.
Not that much different from software really, although I do agree you can go much further in software knowing nothing about it.
I couldn’t say, but it sounds like they’re using math and science to do their work even in those cases. Even just the idea of designing things using tools backed by (even if abstracted by) mathematical models is so wildly divergent from SWE. Maybe the closest we even get to something close to that is like, benchmarks? Sort of? Or maybe type systems a bit? idk
Off the top of my head I would say it can easily get math-y if you do graphics, cryptography, or hard real-time system design. ML, like doing it for real, not just using the libraries. Doing things like graph algorithms is aided greatly if you can reason in set builder notation. And yes, I maintain it’s still SWE and not CS as long as you’re not writing any proofs.
It’s not to say that no software engineers do math or “engineering”, just that some software engineers seem not to. I don’t know what it’s like in other industries; maybe there are some Civil Engineers who don’t do math or science, or have a scientific approach. But definitely in SWE we very often (maybe usually?) jump into programming without a design process, without much more than a minimal happy path test suite, etc. I mean, notably, cryptography is almost never done by SWEs, it’s done by cryptographers. Usually these are pretty specialty roles, often not even inhabited by someone who programs a ton.
I suppose my ultimate point is that the term feels a bit vague, especially when applied to SWE, but perhaps it’s not exclusive.
Our math is set theory instead of calculus and differential equations. There is something very mathematical about working with types and API design to restrict a system to desired behavior and make confident statements about its results to a set of inputs, which is basically all of software development.
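A minimal C sketch of that idea (all names here are invented for illustration): an enum closes the set of inputs, so you can make a confident statement about the result for every member of that set.

#include <stdio.h>

/* A closed set of inputs: the type itself documents every possible value. */
typedef enum { CMD_START, CMD_STOP, CMD_RESET } command_t;

/* Because command_t has exactly three members, we can make a confident
   statement about the result for the whole input set. */
const char *describe(command_t cmd) {
    switch (cmd) {
    case CMD_START: return "starting";
    case CMD_STOP:  return "stopping";
    case CMD_RESET: return "resetting";
    }
    return "unreachable";
}

int main(void) {
    printf("%s\n", describe(CMD_START)); /* prints "starting" */
    return 0;
}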
types
I considered bringing type systems up but it feels wrong to say that you’re only doing engineering if you use a statically typed language. I also don’t think of most people really “doing” types or set theory etc. Some people get a little proof assistant helper and that’s kinda it.
Software engineers also use formal logic a lot more than other branches of engineering. I once simplified a convoluted conditional for a mechanical engineer and they thought I was a friggin’ wizard.
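As a flavor of that, a hedged sketch in C (the convoluted conditional is invented, not the actual one from that story): De Morgan’s laws collapse the branches into one readable test, and an exhaustive loop checks that the two versions agree.

#include <assert.h>
#include <stdbool.h>

/* A convoluted conditional of the kind you might inherit: */
bool should_run_verbose(bool enabled, bool quiet, bool forced) {
    if (!(!enabled || quiet)) {
        return true;
    } else if (forced && !quiet) {
        return true;
    }
    return false;
}

/* De Morgan: !(!enabled || quiet) == enabled && !quiet, and both
   branches share the !quiet term, so the whole thing factors: */
bool should_run_verbose_simplified(bool enabled, bool quiet, bool forced) {
    return !quiet && (enabled || forced);
}

int main(void) {
    /* spot-check that the two versions agree on the full truth table */
    for (int e = 0; e < 2; e++)
        for (int q = 0; q < 2; q++)
            for (int f = 0; f < 2; f++)
                assert(should_run_verbose(e, q, f) ==
                       should_run_verbose_simplified(e, q, f));
    return 0;
}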
Sorry for being that guy, but when you code using dynamically typed programming languages, you are still doing types. It’s the language that isn’t doing explicit type checking at “compilation time”, but the types are still there.
This even applies to weak typing.
But still, you are right; you can do plenty of engineering if all your variables are doubles :D (but still, your functions still have arity, which is a type).
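A toy C illustration of that last point (the function is invented): even when every value is a double, the prototype’s arity is still a type the compiler enforces.

/* Every variable is a double, yet arity is still checked: the
   prototype gives the function type (double, double) -> double. */
double hypot_squared(double a, double b) {
    return a * a + b * b;
}

int main(void) {
    double ok = hypot_squared(3.0, 4.0); /* fine: two arguments */
    /* hypot_squared(3.0);                  error: too few arguments */
    (void)ok;
    return 0;
}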
YOU CANNOT ESCAPE TYPING. YOU CAN ONLY PRETEND IT’S NOT THERE.
Have you programmed in assembly? All you have is some bits at some address with nothing to enforce a particular interpretation of said bits. Is it a character? Integer? Pointer? String? Float? Structure? The assembler doesn’t care. Also, functions? We have subroutines. Data can be passed in memory, in registers, or both. And again, there’s nothing to enforce that a subroutine is called correctly. Or that you called the proper entry point to a subroutine.
I indeed have. And you have opcodes for addition that treat those bytes as numbers. And in many CPUs, there are different addition opcodes for different numeric types.
Assembly would be closer to a weakly typed language. The memory positions have no “inherent” type (although I think some CPUs had tagged memory?), but assembly programmers must certainly care about types.
You can say that types are a construct in the programmer’s mind, but I would say there’s little that you can program without thinking about types.
I would maybe argue against this being a good example. Many assembly languages (I am thinking especially of load store architectures) are statically typed. They can have instructions that only operate on registers, instructions that operate on memory operands, instructions that operate only on specific subsets of registers (even in, say, x86 you can’t use AVX instructions on arbitrary registers), etc., and this can be checked statically by an assembler.
Can you name an assembly language that is statically typed? I would be very interested in seeing one. With NASM, yes, I can define memory with mnemonics like DB and DD, but any “type” information they may provide isn’t used by NASM: the following assembles without error:
value dd 1234
mov ax,[value] ; no "type" checking
mov byte [value],23  ; "byte" only because NASM
                     ; isn't using the info
                     ; provided by "dd"
movq xmm0,[value] ; still not "type" checking
There’s a difference between instructions (as you say, you can’t use AVX instructions on general purpose registers) and memory. As shown above, I can load a 16-bit integer from a memory location defined as a 32-bit value, or move a byte value into same (but a counterargument could be that I had to “cast” that instruction), or load a vector register (larger than 32 bits) from that same memory location.
And that still leaves calling subroutines. When I write CALL FOO there’s nothing the assembler can do (that I know of) to ensure I’m passing data to FOO where FOO expects it to be (either in registers or memory). In fact, there’s nothing preventing me from writing CALL FOO+3 (except if an assembler refuses to assemble a CALL instruction in the presence of a full expression).
Everything I am talking about is a variant of the AVX example. E.g., in ARM you can only add registers; you can’t use memory operands with add, and an assembler would be within its rights to reject it at assemble time. The type of add() is register to register.
Likewise even in x86 you can’t mov to an immediate value (mov 7, 0), you have to mov to a register or an address expression (mov [7], 0).
It’s true that assembly doesn’t have subroutines; I wouldn’t say that means it has untyped subroutines, it just doesn’t have subroutines. Even in a higher level language you can pass “parameters” through globals.
But registers (at least the general purpose registers in x86) are untyped. They’re just a collection of bits. Shifting an address doesn’t have semantic meaning, but how does the CPU know that EBX contains an integer so it can do the SAR vs. an address where such an operation is meaningless? Types inform the compiler or interpreter what operations are allowed on certain bits and which operations aren’t. In my example above, is the 32-bit value value a 32-bit signed quantity? A 32-bit unsigned quantity? A pointer to a structure? A pointer to an array? A pointer to some other function? You don’t know, because there’s no type the assembler can use to restrict the operations upon it. You are mistaking limitations in instructions (like the ARM that can only add registers) with type information (what operations on the bits are allowed—again, are the bits an integer, or a pointer?).
Also, what do you call the bits of code that are the target of a CALL instruction? Personally, I call them subroutines. I never said they don’t exist in assembly, just that the assembler can’t detect you aren’t calling it with the proper data it needs.
The values in a register are untyped (or unityped). A register name has type register however. I am not saying registers are analogous to variables with types. If you think about the machine code of ARM this is maybe clearer: if you have an add instruction, the destination is always a register, never an address. If it is 1 it is register 1; if it is 2 it is register 2; it is never address 1 or address 2. The first hole in the textual syntax of add must be a register name, and if you put something other than a register name there it will fail to type check.
I never said they don’t exist in assembly, just that the assembler can’t detect you aren’t calling it with the proper data it needs.
My point is there is a difference between say
foo1(a, b)
foo2()
foo3(a: t1, b: t2)
and assembly can’t distinguish between 1 and 2, not just between 1 and 3. Its inability to check arguments of functions goes beyond the types; it can’t express that there are arguments outside of convention and documentation. You can write OO style code in C but it doesn’t have classes as a syntactic element.
I would generally refer to foo in the context of call foo as a label.
A register name has type register however.
Eh? I don’t follow. If you mean, for example, that the A register on the 6502 is an accumulator, X and Y are index registers, and S is a stack register, then yes? That does kind of imply some form of type on what the register can be used for. But that’s far from universal. RISC-V has 32 general purpose registers. There are conventions about what ranges of registers are used for, but that’s all it is—convention. ARM is similar. So is the VAX, only it has 16 general purpose registers, not 32 (and yes, there are some instructions that use certain registers on the VAX; that still doesn’t preclude said registers from being used as general purpose registers for other instructions).
The first hole in the textual syntax of add must be a register name, and if you put something other than a register name there it will fail to type check.
It’s a syntax error, not a type error.
Perhaps another example then. I’m currently writing a project in assembly. I have a structure where the first field is a pointer to a function. This is followed by an array of pointer to pointer to function. I spent over an hour going through traces of the code blowing up until I realized I had put a pointer to a function in the array of pointer to pointer to function. Because assembly is untyped, this went unnoticed by the assembler and by me. If this was Rust (or heck, even C) the compiler would have caught the error because of the type information the compiler tracks for each variable. Had I been able to tag “this word is a pointer to a function, these words are a pointer to a pointer to function” then yes, that would have caught the bug, but I don’t think that’s too viable in assembly.
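A hedged C reconstruction of that bug (the structure and names are invented to match the description): the compiler rejects a plain function pointer where a pointer to pointer to function belongs.

typedef void (*handler_fn)(void);

struct dispatch {
    handler_fn  first;     /* pointer to function */
    handler_fn *table[2];  /* array of pointer to pointer to function */
};

void do_thing(void) { }

handler_fn do_thing_ptr = do_thing;

struct dispatch d = {
    .first    = do_thing,       /* fine */
    .table[0] = &do_thing_ptr,  /* fine: pointer to pointer to function */
    /* .table[1] = do_thing, */ /* C flags this: incompatible pointer
                                   types, the mistake the assembler missed */
};

int main(void) {
    (*d.table[0])();  /* calls do_thing through the double indirection */
    return 0;
}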
Here’s another example. I’ll use the VAX because it will be much clearer (I think). The following code is fine:
x .long 3.1415926 ; floating point value
movl x,r1 ; load x into register
mulf2 #2.0,r1 ; multiply r1 by 2.0
Here we have a floating point value, we move it into R1, then multiply it by 2.0. This is fine. A float times a float is a defined concept. No issues here. Contrast with this:
x .long 3.1415926 ; floating point value
movl #x,r1 ; load address of x into register
mulf2 #2.0,r1 ; multiply r1 by 2.0
Instead of loading 3.1415926 into R1, I load the address into R1, then multiply that by 2.0. To the CPU, that’s fine. Semantically, what does it mean to multiply an address by a float? If the assembler had types, this would be flagged, but since assembly is untyped (my position), it’s not (R1 will be interpreted as a floating point value, not converted from integer to float, because, again, there is no typing in assembly). Yes, this is a contrived example, but I can see it happening in a more complicated routine (mixing up if a register contains data, or the address of data).
It seems to me that you believe that because the ARM (your example) can’t add directly to memory, that somehow conveys some form of type information. It doesn’t. I can load bits from memory that the programmer intended to be an address and treat it as a floating point value. No complaint from any assembly I know.
It’s a syntax error, not a type error.
Static type errors are a subset of syntax errors. All (static) type errors are syntax errors, but not all syntax errors are type errors.
This is maybe clearer with a different example, imagine in C:
*x = 2;
The value contained within x is irrelevant. An untyped C will generate the same code for this expression as regular C.
But a typed C can see
int x;
*x = 2;
vs
int *x;
*x = 2;
and decide not to generate code in the former case.
This is directly analogous to
add x, 1
An untyped assembly language can generate code no matter what x is. But if x is a label in one program and a register in another an assembler can reject the former and accept the latter.
I can load bits from memory that the programmer intended to be an address and treat it as a floating point value. No complaint from any assembly I know.
The types being checked by an assembler aren’t addresses vs doubles, they’re registers vs memory or register kind a vs register kind b.
add 1, 1 and add r1, 1 isn’t different syntax; a naive assembler (for a uniform instruction width arch like MIPS) can go opcode, param, param and just spit out whatever you put in. It will even be meaningful for add 1, 1; it’s just that it will be the same as add r1, 1 and not the same as add [1], 1.
r1 isn’t a variable named r1 with a type of machine word, it’s a literal value r1 with a type of register.
It is true that different assembly languages do this to different degrees, influenced by the architecture (which is why I said “many” and “especially load store architectures” in my original post).
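To make that concrete, here’s a minimal sketch in C of what such an assembler-side check could look like (entirely invented, not modeled on any real assembler): the “type” it checks is the operand’s kind, not what the bits mean at run time.

#include <stdio.h>

/* A toy assembler's view of operands. */
typedef enum { KIND_REGISTER, KIND_IMMEDIATE, KIND_MEMORY } operand_kind;

typedef struct {
    operand_kind kind;
    long         value;  /* register number, literal, or address */
} operand;

/* For a load/store architecture: add takes register operands only. */
int check_add(operand dst, operand src) {
    if (dst.kind != KIND_REGISTER || src.kind != KIND_REGISTER) {
        fprintf(stderr, "type error: add requires register operands\n");
        return 0;
    }
    return 1;
}

int main(void) {
    operand r1  = { KIND_REGISTER,  1 };
    operand one = { KIND_IMMEDIATE, 1 };
    check_add(r1, r1);   /* accepted: add r1, r1 */
    check_add(one, r1);  /* rejected: add 1, r1  */
    return 0;
}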
Likewise even in x86 you can’t mov to an immediate value (mov 7, 0), you have to mov to a register or an address expression (mov [7], 0).
That’s a limitation of the x86. On the VAX the immediate addressing mode would very well work if code pages were writable. Still would be pointless, but how the immediate addressing mode works would allow it as a destination. Also interesting is that the VAX supports floating point operations, but they work on the general registers—there are no separate floating point registers on the VAX. And the program counter is just another general register. It’s a fun architecture.
This is not a limitation of the architecture but the syntax, like all static type checking. You could make an assembler that interpreted mov 7, 0 as equivalent to mov [7], 0, just like you could have a variant of C that let you apply the dereference operator to ints or doubles.
This is not a limitation of the architecture but the syntax, like all static type checking.
Python does not have static type checking, yet it still has syntax. I think you are mixing up the two concepts.
I actually don’t think that’s true but someone who’s a Type Person can go ahead and answer that. My understanding is that dynamically typed languages would not really “count” as typed other than in the trivial sense that every value has type “Any”. Everything is just a value, and those checks you’re referring to are the equivalent of “if 1 == 2”, not something like a type system that would involve a type checker.
See https://docs.python.org/3/reference/datamodel.html#objects-values-and-types.
Even in weakly typed languages, most programs involve types, and the programmer must think about them to write correct programs.
Interestingly, there are also types outside what most programs capture. Although you can definitely express units (e.g. meters, seconds, etc.) in most programming languages, many programs that deal with physical magnitudes do not encode units as types. But still, as a programmer you must get them right or your program will not work correctly!
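One way to encode that, as a sketch in C (the wrapper names are invented): distinct single-field structs make meters and seconds incompatible types, so swapping them is a compile-time error instead of a silent physics bug.

/* Distinct wrapper structs give physical units distinct types. */
typedef struct { double v; } meters;
typedef struct { double v; } seconds;
typedef struct { double v; } meters_per_second;

meters_per_second speed(meters d, seconds t) {
    return (meters_per_second){ d.v / t.v };
}

int main(void) {
    meters  d = { 100.0 };
    seconds t = { 9.58 };
    meters_per_second s = speed(d, t); /* fine */
    /* speed(t, d); */                 /* error: incompatible struct types */
    (void)s;
    return 0;
}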
What I’m saying is that the pedantic interpretation of type systems would not apply to dynamically typed languages, regardless of nomenclature like “TypeError” etc., which is purely at runtime, amounts to value comparisons, and does not involve actual type checking. Yes, you can encode all values as types as far as I know.
Regardless, someone who does more type theory stuff can weigh in. This is just my recollection.
No, that’s not correct. That’s why we differentiate between dynamic and static type systems.
Okay well I’ve heard at least some people say the opposite. So idk? It definitely doesn’t matter though
I guess sometimes “static type system” is abbreviated to just “type system”. But it’s not precise.
Engineers are typically those who apply mathematics and scientific processes to a problem during the design, implementation, and operation of a system. This is, uh,… rare in “software engineering”.
I’m not sure that using mathematics and scientific processes in some kind of a rigorous way (for some threshold of rigor defined by the speaker) is what defines engineering.
Civil and industrial engineering, yes: it’s heavily regulated (all for very good reason, of course), all of the problems are well-known and well-studied, and so the rigor is simply the means to the end of provably meeting the safety targets and precisely solving the tradeoff between project costs and project requirements.
There is also a big chunk of engineering that does not need that much rigor, but which is still undeniably engineering.
All that said, I of course agree that rigor solves problems and we should strive towards it, but engineering is not defined by mathematical rigor — it is defined by solving practical, real-world problems using (the products of) applied sciences.
Engineering as a profession is defined by professionalism: formal, organized standards of conduct and responsibility, typically maintained by professional societies. This is more obvious in more heavily regulated fields, but it’s a legal and social standard of “rigor” that has nothing to do with math. Doctors, lawyers, professors, and architects are all professionals in this sense.
Civil and structural engineering also used to be cowboy fields where credentials were not needed, nor even available. That was the 18th and 19th centuries.
Engineering as a profession is defined by professionalism
Sure, but this is a truism. *Any* profession is defined, among other things, by professionalism. However, I think we were talking about engineering as a field of human endeavor, not engineering as a profession.
(And even then, as you acknowledge, professionalism can also be achieved by means other than *mathematical* rigor.)
Is “software engineering” a profession, then? Or merely a “field of human endeavor”? I think that’s the distinction that matters here.
Both, obviously? There’s the profession of “such-and-such engineer”, and there’s software engineering as a field of human endeavor.
I don’t think that the “merely” and the scare quotes are warranted, either.
You seem to be trying to ignore the distinction. Perhaps you don’t see how it’s relevant to this topic of “vibe engineering”.
Would you go to a doctor who didn’t graduate from medical school, isn’t licensed by a medical board, doesn’t have malpractice insurance, and who will outsource diagnosis and prescription to an AI model that has read all the medical journals?
I have acknowledged the distinction multiple times. This is about as far from “ignore” as I can possibly get. As you now seem to be attacking me instead of what I wrote, I won’t engage further.
That’s OK. I’m not attacking anybody here, but you certainly don’t need to continue this conversation or the line of reasoning. Maybe it will be interesting to others.
But let’s think for a minute about what “software engineering” could mean, absent any coherent or enforceable professional standards. When we use the term “engineering” informally, like when we say “social engineering”, what are we saying?
Is that like saying Mom “doctored” me by putting a band-aid on my skinned knee and I “lawyered” her into using a special superhero bandaid? I’m not so sure, because these are analogies to activities of real professions that are distinct and widely recognized. What’s the target of the analogy when we say “software engineering” to suggest that we’re doing what engineers do? Which engineers are those, exactly? I suspect that when we use the big fuzzy “field of human endeavor” blanket, we’re just trying to borrow a little unearned respect.
Let’s face it. “Engineer” related to software development is just a vanity title. Can any non-trivial software application be “engineered” with any certainty as to specification, delivery or quality?
Outside the domains of computer and data sciences and computing infrastructure, are mathematics and science at the heart of an application, or is it instead the modeling of a system known by “experts” outside of the development team?
Vibe Coding as a practice is here to stay. It works, and it solves real-world problems – getting you from zero to a working prototype in hours. Yet, at the moment, it isn’t suitable for building production-grade software.
It has always been possible to build prototypes from copy-pasted code. But a prototype is only as valuable as what was learned from building it. If I don’t learn anything, my prototype doesn’t prove much. Speaking for myself, I prefer to RTFM and dumpster dive through GitHub and SO myself rather than let an LLM do it for me. I learn more per watt that way.
When the sources are available, you can point to them, pick them apart, retrace your steps, compare different answers, and get a second opinion.
When you’re given a confident answer by a machine that doesn’t know how it knows what it (supposedly!) knows, or where it came from, and isn’t even capable of recognizing its own (equally confident) hallucinations… I guess you just roll the dice and ask again?
What you learn from a prototype is more generally “it can be done” rather than “how to do it”. The subsequent drafts or final versions often differ from the prototype in “how to do it”.
The purpose of a prototype is indeed to demonstrate that something can be done. Learning a bit about the “how” helps answer questions about, for example, failure modes, performance, and extensibility. If those questions aren’t answered, the prototype merely demonstrates that “it can be done,” where “it” is the prototype doing exactly what was demonstrated under the exact same conditions it was demonstrated.
This is why I value what was learned in the building of the prototype. The code is just an artifact. When a responsible engineer builds production code on the strength of what was demonstrated in a prototype and the production code turns out to be slow or has a particular bug, I expect them to have learned enough about the “how” to either fix those problems or at least point to an earlier disclaimer.
IME, never had a prototype be an accurate reference for failure modes, performance, or extensibility. All of those tend to change drastically when writing the serious version, given the prototype wasn’t written with those in mind. They’re then best used as a reference for functionality (does it do the thing) which can be the baseline for correctness.
But the main point was that prototyping can be done however (even with AI) as long as it “does the thing”.
No developer of any appreciable experience would choose to live in a codebase built via vibes engineering.
I fielded Vibes PR reviews for a quarter and yearned for death.
We’ve always been vibe engineering. We are still not sure how reality itself is assembled at the quantum level. I think the problem most engineers/developers have is that LLMs are being used to convince normal people that they can now code. Kinda similar to the dumpster fire that happened a few years ago with those bootcamps. However, instead of “anyone can learn to code,” it’s “anyone can code.”
I have nothing against anyone learning to code, and I do believe most people can learn to code. Whether they should, however, is another question. Many people joined the bootcamps out of financial need and not out of love for computers. There is nothing wrong with that, but they joined on false premises. This is not really different. If you want to see people vibe coding, go have a look at the bolt Discord/GitHub groups. See how many people are complaining that their app is not working and that bolt has consumed their tokens. It is a waste of energy, time, and money just so that a few Sam-like people can profiteer off another hype.