nyrikki 3 days ago

The ironic part is that the COBOL in that example is completely wrong and non-functional.

Note:

    WS-MENU-ITEM(1) OF WS-LABEL

when the actual syntax, which is similar to natural language, is:

    elementary-var IN|OF group-var

It is a good example if you want to prove to people that these tools are a bad idea.

I prefer to use these tools as part of a red/green/refactor workflow, where I don't use them in the refactor step or the test-case step.

  • hmottestad 3 days ago

    Another thing they are generally good at is writing Javadocs and comments. GPT-4 manages to return my code, intact, complete with Javadocs for all the methods I ask it for.

    • jiggawatts 3 days ago

      I’m picturing the docs it would generate for the code I see at government agencies written by $500/month outsourcers.

      “This function processes logins using a complex and unclear logic. Exceptions are not thrown; instead, most failures are represented using a success code that sets the currently logged-in user to either null or a user object with a null or empty string as the name.”

      • hmottestad 3 days ago

        Thankfully my code is a bit simpler than that :P

        Here is an example:

          /**
           * Switches the current timeout settings to use the SPARQL-specific timeouts.
           * This method should be called when making SPARQL SERVICE calls to apply
           * shorter timeout values.
           *
           * <p>
           * The SPARQL-specific timeouts are shorter to ensure that unresponsive or slow
           * SPARQL endpoints do not cause long delays in federated query processing.
           * Quick detection of such issues improves the responsiveness and reliability
           * of SPARQL queries.
           * </p>
           */
          public void setDefaultSparqlServiceTimeouts() {...
fny 3 days ago

DSLs are not dead.

I have had the opposite experience. For complex tasks, LLMs fail in subtle ways that require inspection of their output: essentially, the “declarative”-to-“imperative” translation is bug-ridden.

My trick has been to create DSLs (I call them primitives) that are loaded as context before I make declarative incantations to the LLM.

These micro-languages reduce the error space dramatically and allow for more user-friendly and high-level interactions with the LLM.
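
One example (everything here is invented for illustration; llm() stands in for whatever chat-completion client you use): the primitive definitions ride along as context, the model answers only in the DSL, and the interpreter rejects anything outside it.

  DSL_SPEC = """You may only answer with lines of the form:
    FILTER <column> <op> <value>
    SORT <column> ASC|DESC
    TAKE <n>
  """

  def interpret(program, rows):
      # The guardrail: anything outside the primitives is an error.
      import operator
      ops = {"==": operator.eq, ">": operator.gt, "<": operator.lt}
      for line in program.strip().splitlines():
          parts = line.split()
          if not parts:
              continue
          cmd, *args = parts
          if cmd == "FILTER":
              col, op, val = args
              rows = [r for r in rows if ops[op](str(r[col]), val)]
          elif cmd == "SORT":
              rows = sorted(rows, key=lambda r: r[args[0]],
                            reverse=args[1] == "DESC")
          elif cmd == "TAKE":
              rows = rows[:int(args[0])]
          else:
              raise ValueError("not a primitive: " + line)
      return rows

  def ask(llm, request, rows):
      # llm() is a stand-in for your favorite chat-completion call.
      program = llm(DSL_SPEC + "\nRequest: " + request)
      return interpret(program, rows)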

  • vharuck 3 days ago

    TFA covers this (I think, it got real jargony at times):

    >Declarative processing of configurations generated via AI is a way to ground the AI, this requires a lot of work since you don't just offload requests to an AI but rather your processing logic serves as a guardrail to ensure what's being done makes sense. In order for AI to be used in applications that require reliability, this work will need to be done.

    When I was playing around with AI for data analysis last year, the best results for me came from something like this but more on the imperative side: RAG with snippets of R code. My first attempt was taking snippets directly from existing scripts and adding comments to explain the purpose and caveats. That didn't work well until I put in a lot of effort replacing the "common knowledge" parts with variables or functions. For example, no more `sex = 1`, but `sex = "male"`. Common tasks across scripts were refactored into one or a few functions with a couple parameters, and then placed in a package. The threshold for applying the DRY principle was lowered.
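
    A sketch of the kind of refactor I mean (Python here rather than R, with an invented coding scheme):

      import pandas as pd

      # Before: "common knowledge" baked into a magic number.
      #   df = df[df.sex == 1]

      SEX_CODES = {"male": 1, "female": 2}  # the coding used to live in our heads

      def filter_by_sex(df, sex):
          # The snippet the RAG store indexes is now self-describing.
          return df[df["sex"] == SEX_CODES[sex]]

      df = pd.DataFrame({"sex": [1, 2, 1], "age": [34, 51, 29]})
      males = filter_by_sex(df, sex="male")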

    In the end, I decided a custom solution wasn't worth the effort. The data had identifying details of people, so any generated code would have to be checked and run by analysts who already had access to the data. But the process of refactoring stuff into descriptively-named objects was such a big benefit, the AI code wasn't doing enough to justify the effort. Again, this was using a custom system made by a total ML noob (myself) with GPT 3.5. The execs banned usage of LLMs until they could deal with the policy and privacy concerns, so I don't know what's possible these days.

  • orochimaaru 3 days ago

    Bingo!!! I use this approach for data science tasks today: create a very specific DSL that has mathematics, set theory, etc. as context, and set up your data science exploration using the DSL as input. It's been fairly decent so far. It works for two specific reasons: 1. I have a fairly specific DSL, expressive enough for the task at hand, that is easily available (math notation has been around for centuries, and algorithms for at least half a century). 2. I use Apache Spark, so everything around parallelizing and synchronizing I don't handle in the code myself (most of the time, anyway).
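
    To give the flavor, a toy sketch in PySpark (the one-line grammar here is invented for this comment; the real DSL is richer):

      from pyspark.sql import SparkSession, functions as F

      spark = SparkSession.builder.getOrCreate()
      df = spark.createDataFrame(
          [("a", 1.0), ("a", 3.0), ("b", 5.0)], ["grp", "x"])

      def run(stmt, df):
          # DSL statement -> Spark plan; Spark owns the parallelism.
          agg, by = stmt.split(" by ")            # e.g. "mean(x) by grp"
          fn, col = agg.rstrip(")").split("(")
          return df.groupBy(by).agg(getattr(F, fn)(col))

      run("mean(x) by grp", df).show()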

    • arslnjmn 3 days ago

      This is a fascinating topic and something I'm looking into these days, specifically removing the need for Data Scientists to write Spark code. It would be great if you could share more details about the DSL. The DSL also sounds interesting in its own right!

  • pknerd 3 days ago

    Please talk more, or write a blog post about this approach.

  • arslnjmn 3 days ago

    This is a very interesting use case of LLMs and something I'm looking into these days. I would appreciate it if you could share more details on the challenges you ran into using DSLs with LLMs, and how you solved them.

  • dartos 3 days ago

    What do these DSLs look like, if you don’t mind sharing?

  • tomrod 3 days ago

    What a bright approach to this!

Elfener 3 days ago

My opinion on this whole "using LLMs as a programming language" thing is described nicely by this comic: https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...

> Do you know the industry term for a project specification that is comprehensive and precise enough to generate a program?

> Code, it's called code.

  • TeMPOraL 3 days ago

    That comic, while funny, is making a pretty weak argument (and basically reverses that other joke that the highest-level/sixth generation programming language is a graduate CS student).

    Obviously machines need code to execute. But humans don't need to write every line of it. Transitioning to using (future) LLMs as a programming language is transitioning from the role of a programmer to the role of a customer (or at least PM). Conversely, as a programmer (or technical manager), if your job is to extract a precise spec from a customer that doesn't know what they want, your job is exactly the one "LLMs as a programming language" are going to replace.

    • malkarouri 3 days ago

      The point is that explaining the requirements in a precise manner to an LLM is literally coding the problem in a higher-level language; the LLM is acting as a compiler for that precise description in English.

      I actually am sympathetic to your point about the value of LLMs in programming, but more from the perspective that LLMs can help us do the precise description gradually and interactively, in a much better way than a dumb REPL.

      • skydhash 3 days ago

        Have you interacted with a Common Lisp, Smalltalk, or Prolog REPL? That's what programming should look like, but with other languages you mostly have to visualize these interactions, or make do with the rougher edit-compile-run cycle.

        If you think of your job as implementing only specifications (in the form of Jira tickets), then maybe you don't see the difference. But more often, you're trying to define the problem in the first place, as the customers can only describe the current situation and their needs. The job is to design a system that could satisfy those needs, going iteratively from natural language to code and removing ambiguity in the process. Stopping midway through that process and hoping an LLM can continue down it is just playing a slot machine with code. And then there's the whole matter of system evolution and maintenance.

        • Guthur 3 days ago

          And the tragedy is now we'll be allowing economic rent on something that has been free for decades but most people refused to use. /jaded lisp and prolog programmer

      • TeMPOraL 2 days ago

        That was kind of my point, too: no LLM will solve your problem zero-shot based on a vague description. Iteration will always be required. A good enough LLM will ask you clarifying questions, write prototypes, accept your feedback, etc., and as you iterate, you get closer and closer to a program you want without writing a line of code.

      • viraptor 3 days ago

        The "precise" part is not even necessary. In short, if 90% of the time guessing the intent works, then I may want to use that system anyway. The projects almost every one of us work with are underspecified anyway, but we have a decent intuition of what should happen (or we just don't care in many edge cases and let the default failure happen). We can read the result and say "is the good enough?" and iterate as you described.

marcus_holmes 3 days ago

Interesting that they used COBOL as the example language - COBOL was itself an attempt to do the same thing. It was thought that ordinary businessfolk could create programs without "learning to code" because COBOL is so close to English. That didn't work out well.

As any software dev knows, the problem is not the language. The problem is that the vague musings of an "ideas guy" cannot be turned into an actual program without a lot of guesswork, refinement, clarification, "filling-in", and outright lying. The required understanding and precision of description is just not that far from the understanding and precision required to write the code.

alexpetros 3 days ago

This is not declarative programming, it's codegen. Codegen has its place, but it does not have the most important property of declarative interfaces: that the implementation can improve without altering the code.

As others have pointed out, natural language is often insufficient to describe precisely the operations that you want. Declarative programming solves this with specialized syntax; AI codegen solves this by guessing at what you left out, and then giving you specific imperative code that may or may not do what you want. Personally, I'll be investing my time and resources into the former.

  • RodgerTheGreat 3 days ago

    On the other hand, if you use an LLM to generate code, all you have to do is change models, or adjust the model's temperature, or simply prompt the model a second time and you can expect the result to be teeming with a fresh batch of new flaws and surprising failure modes. An endless supply of debugging without the inconvenience of having to write a program to begin with!

bytebach 3 days ago

I recently had a consulting gig (medical informatics) that required going from declarative English to imperative code. Direct code generation by the LLM turned out to be buggy, so I added an intermediate DSL implemented in Prolog! The prompt described the Prolog predicates the LLM had to work with, their semantics, and the declarative goal. The resulting (highly accurate and bug-free) Prolog code was then executed to generate the conventional imperative code (Groovy), which was then executed dynamically. In some hand-wavy way, the logical constraints of using Prolog as an intermediate DSL seemed to keep the LLM on the straight and narrow.
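
To give a flavor of the grounding involved: before executing anything, the pipeline can whitelist-check the conjunction the LLM returns. A minimal sketch in Python (not the actual client code):

  import re

  ALLOWED = {"biomarker", "tumor", "chemo", "surgery", "other_treatment",
             "radiation", "atMost", "atLeastOneOf", "age", "not"}

  def check_conjunction(prolog_code):
      # Every functor the LLM used must come from the prompt's toolkit.
      for functor in re.findall(r"([a-zA-Z_]\w*)\s*\(", prolog_code):
          if functor not in ALLOWED:
              raise ValueError("predicate not in the toolkit: " + functor)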

  • rramadass 3 days ago

    This is great! I had been thinking of how to constrain and make the prompts precise so "modes of interpretation" of my prompt by the LLM could be limited. Something like how one would simplify one's language with a child to get it to comprehend and act within bounds.

    Would you mind sharing more details of your approach and setup?

    • bytebach 3 days ago

      Sure. The app selects matching patients based on demographics, disease, prior treatments, and biomarkers. It also has to be able to express numeric constraints ('no more than two surgeries ..', 'at least one of the following biomarkers'). The following prompt sets up the Prolog toolkit the LLM is allowed to use. The generated Prolog conjunction is then run through a Prolog meta-interpreter that generates the matching code. Though it sounds long-winded, it is more than fast enough to generate responses to user queries in an acceptable time:

      -----------------

        Now consider the following Prolog predicates:
        
          biomarker(Name, Status) where Status will be one of the following integers -
         
           Wildtype = 0
           Mutated = 1
           Methylated = 2
           Unmethylated = 3
           Amplified = 4
           Deleted = 5
           Positive = 6
           Negative = 7
         
          tumor(Name, Status) where Status will be one of the following integers if known, else left unbound -
          
           Newly diagnosed = 1
           Recurrence = 2
           Metastasized = 3
           Progression = 4
          
          chemo(Name)
         
          surgery(Name)  Where Name may be an unbound variable
      
          other_treatment(Name)
      
          radiation(Name) Where Name may be an unbound variable
           
          Assume you are given a predicate atMost(T, N) where T is a compound term and N is an integer.
          It will return true if the number of occurrences of T is less than or equal to N, else it will fail.

          Assume you are given a predicate atLeastOneOf(L) where L is a list of compound terms.
          It will succeed if at least one of the compound terms, when executed as a predicate, returns true.
         
          Assume you are given a predicate age(Min, Max) which will return true if the patient's age is in between Min and Max.
           
          Assume you have a predicate not(T) which returns true if predicate T evaluates false and vice versa. 
          i.e. rather than '\\+ A' use not(A).
         
         Do not implement the above helper functions.
        
         VERY IMPORTANT: Use 'atLeastOneOf()' whenever you would otherwise use ';' to represent 'OR'.  
         i.e. rather than 'A ; B' use atLeastOneOf([A, B]).
          
        EXAMPLE INPUT: 
         Patient must have recurrent GBM, methylated MGMT and wildtype EGFR. Patient must not have mutated KRAS.  
        
        EXAMPLE OUTPUT:
          tumor('gbm', 2),
          biomarker('MGMT', 2),
          biomarker('EGFR', 0),
          not(biomarker('KRAS', 1))
        
        Express the following constraints as a Prolog conjunction. 
              Do not enclose the code in a code block. Return only the Prolog code - no commentary.
              Be careful to use only the supplied constraints, do not add any: 
              
              $constraint
      • YeGoblynQueenne 7 hours ago

            tumor('gbm', 2),
            biomarker('MGMT', 2),
            biomarker('EGFR', 0),
            not(biomarker('KRAS', 1))
        
        Note well: this is not valid Prolog code. If you put it in a file and consult it with a Prolog interpreter you'll get multiple errors.

        You could call it Prolog pseudocode but in that case you don't need Prolog-like notation. You can just state your "constraints" in natural language. Have you tried this?

      • rramadass 3 days ago

        Very Neat! Appreciate your sharing this.

        Just to be clear: your EXAMPLE OUTPUT is what is then fed to your Prolog meta-interpreter to generate executable code in some other language (you mentioned Groovy), which is what actually runs, i.e. answers the user query. Essentially, then, a context is bounded by "pidgin Prolog" (i.e. Prolog + natural language) for the LLM, and user queries in natural language are submitted against it to generate valid Prolog code. This can be thought of as the logic/constraints of the Prolog inference engine in the input modulating the LLM's interpretation/inference of the accompanying natural language, keeping it "on the straight and narrow" towards an accurate output.

        I was actually thinking of using "Structured English" (https://en.wikipedia.org/wiki/Structured_English) for this and maybe build a CASE Tool using LLMs for round-trip software engineering.

ingigauti 3 days ago

I've been developing a programming language that does this; the repo can be found at https://github.com/plangHQ

Here is a code example

  ReadFileAndUpdate
  - read file.txt in %content%
  - set %content.updated% as %now%
  - write %content% to file.txt

I call this intent-based programming. There isn't a strict syntax (there are a few rules): the developer creates a Goal (think: function) and writes steps (starting with -) to solve the goal.

I've been using it for clients with very good results, and over the nine months I've been building with it, the experience has shown that far less code needs to be written and that you see the project from a different perspective.

  • quantadev 3 days ago

    I like what you're doing there. It does seem like we might need some new kind of language to interface with LLMs: a sort of language of prompt engineering, a bit more specific than just raw English but more powerful than a pure templating system.

    • ingigauti 3 days ago

      Yeah, I don't believe LLMs will be able to code fully; they are analog, trying to do something digital where everything needs to be 100% correct.

      Plang being an analog language, I see that the LLM is able to code so much more, and it never has syntax, library, or other build errors.

      • quantadev 3 days ago

        But we also have to admit that LLMs may become (or maybe OpenAI o1 already is) smart enough that they can not only write the code to solve some task, but understand the task well enough to write even better unit tests than humans ever could. Once AI starts writing unit tests (even internally) for everything it spits out, we can probably say humans are truly obsolete for writing apps. Even then, however, the LLM output will still need to be computer code, rather than having LLMs just "interpret" English all the time to "run" apps.

        • skydhash 3 days ago

          Ever heard of the halting problem [0]? Every time I hear these claims, it sounds like someone saying we can travel in time as soon as we invent a faster-than-light vessel, or better, Dr Who's cabin. There's a whole set of theorems that says, ultimately, that a formal system (which computers are) can't be completely automated, as there are classes of problems it can't solve. For anything LLMs do, you can write better-performing software, except for the task they're best suited for: translation between natural languages. And even that is only because it's a pain to write all the rules.

          [0]: https://en.wikipedia.org/wiki/Halting_problem

          • quantadev 3 days ago

            LLMs are doing genuine reasoning already (and no, I don't mean consciousness or qualia), and they have been ever since GPT-3.5.

            They can already take descriptions of tasks and write computer programs to do those tasks, because they have a genuine understanding of the tasks (again no qualia implied).

            I never said there are no limits to what LLMs can do, or no limits to what logic can prove, or even no limits to what humans can understand. Everything has limits.

            EDIT: And before you accuse me of saying LLMs can understand all tasks, go back and re-read the post a second time, so you don't make that mistake again.

  • r0ze-at-hn 3 days ago

    Hmm, couldn't that example be simplified to:

      SetUpdatedToNow
      - set %content.updated% as %now% in the file "file.txt"
    
    The whole reading and writing feels like a leftover from the days of programming. Reading the file in, modifying it, and then writing it back assumes it fits in memory, and leaves out ideas around locking, filesystem issues, writing to a temp file and swapping, etc. Giving the actual intent lets the LLM decide the best way, which might well be read-modify-write.
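
    For instance, given only the intent, the LLM would be free to pick something like write-to-temp-and-swap (a rough Python sketch, invented for illustration):

      import os, tempfile

      def atomic_update(path, update):
          # Read, modify...
          with open(path, encoding="utf-8") as f:
              content = update(f.read())
          # ...then write to a temp file and swap, so readers never
          # see a half-written file.
          fd, tmp = tempfile.mkstemp(dir=os.path.dirname(os.path.abspath(path)))
          try:
              with os.fdopen(fd, "w", encoding="utf-8") as f:
                  f.write(content)
              os.replace(tmp, path)  # atomic rename
          except BaseException:
              os.unlink(tmp)
              raise
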
    • ingigauti 3 days ago

      The way I designed the language, I had the current languages in mind. Don't forget you are programming; it's just more natural. You still need the details.

      But the main reason is also that it's much more difficult to resolve the intent when mixing multiple actions into the same step. In theory it's possible, but the language isn't there yet.

yu3zhou4 3 days ago

That’s a fascinating direction to explore. It turns out that translating instructions into AI/ML tasks and wrapping them with Python code is easy to build [0]. It starts with an LLM deciding what type of task should be performed; then there's a search over the Hugging Face catalog, inference on a model (picked by heuristics) with Inference Endpoints, and parsing of the output into the most relevant Python type [1]

[0] https://github.com/jmaczan/text-to-ml

[1] Page four - https://pagedout.institute/download/PagedOut_004_beta1.pdf#p...
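
The control flow is roughly this (a self-contained toy; the real project uses an LLM for the task decision and the live Hugging Face catalog):

  CATALOG = {  # stand-in for a Hugging Face catalog search
      "summarization": ["facebook/bart-large-cnn"],
      "translation": ["Helsinki-NLP/opus-mt-en-de"],
  }

  def decide_task(instruction):
      # Stand-in for the LLM's task-classification step.
      return "summarization" if "summar" in instruction.lower() else "translation"

  def text_to_ml(instruction):
      task = decide_task(instruction)
      model = CATALOG[task][0]  # heuristic pick
      # Real version: run inference on an Inference Endpoint and parse
      # the output into the most relevant Python type.
      return task, model

  print(text_to_ml("Summarize this article"))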

  • probably_wrong 3 days ago

    I can't tell whether you're being honest or not. 10 years ago this was a literal joke [1] that people would tell and implement as "look at this terrible idea". Nowadays I can't tell anymore.

    [1] https://gkoberger.github.io/stacksort/

    • yu3zhou4 3 days ago

      I coded this proof of concept to show how easy it is to have AI mingled with regular Python code. I didn't know it was considered a terrible idea xd

unconed 2 days ago

Declarative programming has a specific meaning, which is that you declare the contents of the end state, and allow the code/run-time to compute the work necessary to reach that end state. This can include synchronizing changes, scheduling transition animations, and so on. It's about avoiding O(n^2) lines of code to manage O(n^2) possible state transitions.
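
In code, the contract looks something like this (a toy reconciliation step, invented for illustration):

  def reconcile(current, desired):
      # You declare the end state; the runtime derives the work,
      # so N states don't need N^2 hand-written transitions.
      plan = ["create " + x for x in sorted(desired - current)]
      plan += ["delete " + x for x in sorted(current - desired)]
      return plan

  print(reconcile(current={"a", "b"}, desired={"b", "c"}))
  # -> ['create c', 'delete a']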

The idea that "I tell the AI what I want, and it writes the code for me!" is "declarative programming" is so wrong it's almost comical... except that it's yet another instance of the LLM-idiots confusing their lack of knowledge with not needing to know or understand it. In the process the term "declarative programming" as a concept may end up flushed down the shitter.

agentultra 3 days ago

> Not only does AI eliminate the need for developing a custom DSL, as your DSL now is just plain language

Plain language is not precise enough.

danielvaughn 3 days ago

A while ago I created a very simple AI tool that lets you write any kind of pseudocode you like, and then choose a language to convert it into. I didn't do much with it, but I like that style better because at least you can verify and correct the output.

For instance:

  // pseudocode input
  fizzBuzz(count)
    for each i in count
      if divisible by 3, print 'fizz'
      if divisible by 5, print 'buzz'
      if both, print 'fizz buzz'

  // rust output
  fn fizz_buzz(count: i32) {
    for i in 1..=count {
      match (i % 3, i % 5) {
        (0, 0) => println!("fizz buzz"),
        (0, _) => println!("fizz"),
        (_, 0) => println!("buzz"),
        _ => println!("{}", i),
      }
    }
  }
  • Aeglaecia 3 days ago

    In general I agree with the other commenter who said using LLMs like this is codegen ... but you've given a case that supports another perspective in the thread: that using LLMs like this is akin to treating them as a compiler from spoken language to code ...

RevEng 2 days ago

I would make one small change to the author's analogies of imperative versus declarative and one small change to their description of LLMs for declarative programming.

First, on imperative versus declarative. I would describe imperative as "giving a list of instructions to follow". The words "instruction" and "direction" are largely synonyms in my mind and the difference may be subtler than the original words they are trying to describe. Instead, I would say that declarative programming gives "a goal and a set of constraints". We describe what we want, not how to get it. A large part of describing what we want is by describing what we don't want or can't do.

On using LLMs for declarative programming, I assert that we already do this. Prompt engineering is all about defining a set of constraints on the LLM's response. The goal is often within the system prompt: answer the user's question given the context. The user's request is just one of many constraints on the answer.

This declaration in the form of constraints is a direct result of the fact that LLMs operate on conditional probabilities. An LLM chooses each token by taking the list of all possible tokens and their a priori probabilities and conditioning those on the tokens that preceded it. By prefacing the generated output with a list of tokens describing constraints, we condition the LLM's generation to fit those constraints. The generated text is the result of applying the constraints to the space of all possible outcomes.
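
A toy illustration of that conditioning (a bigram counter, nothing like a real transformer, but the same principle):

  from collections import Counter

  corpus = "the cat sat on the mat . the dog sat on the rug .".split()
  bigrams = Counter(zip(corpus, corpus[1:]))

  def p_next(prev):
      # P(next token | previous token), estimated from frequencies.
      counts = {b: c for (a, b), c in bigrams.items() if a == prev}
      total = sum(counts.values())
      return {tok: c / total for tok, c in counts.items()}

  print(p_next("the"))  # mass spread over cat/mat/dog/rug
  print(p_next("sat"))  # {'on': 1.0} -- fully conditioned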

As we know, this isn't perfect. Most declarative languages and their engines use strict logic to limit the generated solutions, whereas LLMs are probabilistic. The constraints aren't specified in concrete terms but as a set of arbitrary tokens whose influence on the generated output is based on frequency of occurrence within a corpus of text rather than any logical rules.

Still, the fact that the generated output is the result of conditioning based on a set of tokens provided by the user means that it uses constraints to determine an outcome that fits those constraints, which is exactly how we solve a problem based on a declarative description.

__loam 3 days ago

I strongly believe that using these systems as a programming interface is a very bad pattern. They are the ultimate leaky abstraction.

  • skydhash 3 days ago

    Both imperative and declarative programming require an understanding of the domain you're trying to code a solution in. Once you understand the model, the DSL makes a lot more sense. I strongly believe that the people hoping for these no-code tools don't care to understand the domain, or why its formal representation as a DSL is necessary. What makes natural language great is the ability for humans to create a shared model of understanding that aims to eliminate ambiguity. And even then, there are issues. Formalism is what solves these issues, not piling on more random factors.

    • notarobot123 2 days ago

      > What makes natural language great is the ability for humans to create a shared model of understanding that aims to eliminate ambiguity.

      I'd say that natural language enables shared understanding because it *allows* ambiguity. We can construct abstractions and find analogs to put them into words. We expect the hearer to reconstruct those abstractions from a different starting point and by referring to a different set of experiences. The potential for ambiguity is a necessary part of that process. We get closer to a reliably shared construction by layering our analogs and testing our respective mental models against each other.

      Formalism and DSLs definitely do a better job by giving us an initial set of shared meanings to work from. An LLM might be able to wrap a fuzzy interface around a DSL but I agree that sharing a common semantic framework without fuzzy mediation might be a much better idea.

    • quantadev 3 days ago

      It's true that no-code tools have mostly not been that successful in the past (except in very limited circumstances), because eventually you run into cases where it would've been easier to just write some code than to finagle the no-code constructs into doing something they weren't really designed to support. Often the most compact way to specify something is just the Python code itself, for example.

weeksie 3 days ago

Software specification documents will, as they say, rise in status. The kind of specification outlined in this article misses the mark: why would we use anything but actual natural language? That said, there will be real returns to structuring specifications so that they are actionable and easy to navigate to achieve context, in both senses.

  • qsort 3 days ago

    > why would we use anything but actual natural language?

    Because natural language is not a good tool to describe computational processes.

    Which one would you rather write:

    (a) p(x) = x^2 + 5x + 4

    (b) Let us consider the object in the ring of univariate polynomials with complex coefficients defined by the square of the variable, plus five times the variable plus four.

    Every scientific discipline moves away from natural language as soon as possible. Low-code, no-code and the likes have been a dismal failure precisely for this reason. Why would I move back to natural language if I can effectively express my problem in a formal language that I can manipulate at-will?

    • weeksie 3 days ago

      I am not convinced by the utility in this case. I'm well aware of low code and no code problems, but I am less convinced that the same principles apply to LLM code generation.

      By all means use some formal language to describe LLM capabilities and so forth, but the most fantastic thing about using LLMs is that you can convey the why along with the what and get better results and the "why" does not lend itself to expression in formalized notation.

    • handfuloflight 3 days ago

      > Every scientific discipline moves away from natural language as soon as possible.

      Have you seen a scientific paper that only had mathematics?

      Natural language is still necessary for scaffolding, exposition, contextualization.

      • skydhash 3 days ago

        Mathematics is not the only formal language. Every profession soon invents its own jargon because natural language is too ambiguous. For some, that's enough. But science requires more formalism.

        Boole’s Laws of Thought and Church’s The Calculi of Lambda-Conversion are mostly about describing things so precisely that the description of the problem becomes its solution. But formal languages have their own issues.

frozenlettuce 3 days ago

I'm experimenting with something like that, to allow creating a web API from some descriptions in markdown. https://github.com/lfarroco/verbo-lang

  • frozenlettuce 3 days ago

    The initial idea was a general-purpose language, but obviously the scope for that would be too big. I think that having "natural language frameworks" for some application types can work: REST APIs, CLI apps, React components, and so on. If you have a set architecture, like the Elm architecture, centered around events being fired that update some state, that could lead to some standards. One feature that I intend to add is an "interview" with the AI: you write the spec, then it reads it and gives you some questions/issues, like "this point is too vague" or "what do you want to update here, a or b?". That would help ensure that the prompt/spec itself is improvable with AI.

    People say that a microservice is something that "fits in your head". Once the logic gets too complex, you should separate it into another service. Maybe in the future the phrase will be "what fits in the AI's context". That would be a good litmus test: if a piece of software is hard for an AI, maybe we should simplify it - because it is probably too hard for the average human as well.

eigenspace 3 days ago

It's amazing how despite the fact that LLMs can be really useful and transformative for certain things, people like this insist on trying to convince you that they're useful and transformative for something that they're simply shit at doing.

fullstackchris 3 days ago

While interesting, this still can't account for domain expertise and system design decisions - you can't assume every character / line / function / method typed is just "correct" and exactly what you'll need. There are 1000s of ways to do both the wrong and right thing in software.

The real problem always comes back to the fact that the LLM can't just make code appear out of nowhere; it needs _your_ prompt (or at least code in the context window) to know what code to write. If you can't exactly describe the requirements, or (what is increasingly happening) don't _know_ the actual technical descriptions for what you are trying to accomplish, it's kinda like having a giant hammer with no nail to hit. I'm worried about a sort of future where we program ourselves into a circle, all programs starting to look the same simply because the original "hardcore" or "forgotten" patterns and strategies of software design "just don't need to be taught anymore". In other words, people getting things to work but having no idea how they work. Yes, I get the whole "most people don't know how cars work but use them", but as a software engineer, not really knowing how the actual source code itself works? It feels strange, and probably ultimately the wrong direction.

I also think the entire idea of a fully automated feature build/test/deploy AI system is just impossible... the complexity of such a landscape is far too large to automate with some sort of token generator. AGI could, of course, but LLMs are so far from AGI it's laughable.

imtringued a day ago

Interesting idea. This could be used to create new datasets and benchmarks, but I doubt that anyone is going to use this to build production ready apps.

peterweyand38 3 days ago

[flagged]

  • meiraleal 3 days ago

    > People are too stupid to be helped.

    We can see that. Need help?

    • peterweyand38 3 days ago

      Funny. I'm being poisoned and harassed to the point where it doesn't matter. It's called gaslighting. Where's the last homeless person you did this to until they complained all the time? I remember them posting on hackernews. What happened to them? Are they dead?

      I need a place to live where I can sleep and eat without being poisoned. I was gassed with drugs by the public, and city officials coordinated to have it happen. I'm being poisoned so I have constant headaches, and then strangers follow me around and sniff at me and scream in my ear.

      Can you find me a safe place to live and clean food to eat? I don't trust the shelter I'm in or city officials. Is that too stupid to be helped? Or are you powerless to help someone?

      I spend the entirety of my day emailing as many people as I possibly can to warn them away from coming to San Francisco.

      That's all of academia. Anyone powerful and famous I can think of. Anyone with any influence. Over every email platform I can think of while rotating addresses.

      And provide pictures of abandoned buildings. And people stalking me in public. People here are sick.

      You live in ruins and you hurt and poison people you don't like with vigilante mobs and drugs. And so everyone who can leaves. Try and go to a doctor with an illness and see if you can "no really actually" your way into competent medical care.

      Want I should post the picture of people dumping all their fentanyl in the sewer again with the embossed logo of the city of San Francisco on the grate? That's "funny" too.

      I wouldn't be so cruel and mean were I not being poisoned and in constant pain.

      (Edit - cute trick about making it so that what I type has errors in it when I post it so I have to go back and edit it. Happens in my emails too because my phone is bugged. And then when I find all the errors and correct them some homeless guy grunts or some asshole in public "no actually reallys". Christ you're all so fucking ignorant and evil. Oh look I said Christ by all means send the religious loonies after me now. I wonder if the guy who cleans the fentanyl out of your water supply cares that he can't go to the doctors because they're all sick. But that's cool. You're good at programming a phone.)