At the risk of sounding pretty negative to someone who wants to make the world of EDA better, this really seems like a re-discovery of a text netlist with the idea of making it 'easier' to create schematics, but I feel like it's DOA for anything non-trivial.
Most PCB tools have the user create the schematics graphically, but the actual 'design' is usually a text netlist already, or the tools let you export the netlist to an industry format such as EDIF.
SPICE netlists (for example) are also purely text-based, and while many of my EE professors could actually "think in SPICE" (i.e. go from a schematic of a circuit to directly writing in SPICE), for me and pretty much everyone else it was unbelievably painful to modify a SPICE netlist as you went from a simple inverter to something even as "basic" as a two-transistor BJT current-mirror. You want to now copy+paste the design six times? Now you have 6x netlist re-naming fun.
One of the reasons why non-trivial schematics are done graphically is that you can more easily grasp the intent of the design and make "complex" designs very clear. In my opinion, it's a reason why a well-designed block diagram is worth 10x access to the "design", whether it's RTL or C -- you get an understanding of WHY someone did something, not just the what.
For example, you might put resistors in series on a bus to provide termination (best done at the source/driver); what would that look like in this new text-based schematic language? What if I have series termination resistors on the bus, Thevenin termination on the far end, and then a pull-down strapping resistor as well? The block of text will grow quickly without much clarity. I don't doubt it will be functional, but there will be no mental model of what this is supposed to look like, and further why all of these components are there. A well-drawn schematic* will make this overt.
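To make the concern concrete, here's a rough sketch of how that termination scheme might read in a code-based description. The `Net`/`R` helpers below are invented purely for illustration (they're not any particular tool's syntax); the point is how fast a one-glance topology turns into a pile of declarations:

```python
# Hypothetical, minimal netlist-style helpers -- not any real tool's API.
from dataclasses import dataclass, field

@dataclass
class Net:
    name: str
    pins: list = field(default_factory=list)

@dataclass
class R:
    ref: str
    value: str

def connect(net: Net, *pins) -> None:
    """Attach (component, pin-number) pairs to a net."""
    net.pins.extend(pins)

VCC, GND = Net("VCC"), Net("GND")

# One bit of the bus -- multiply everything below by 8 for DATA[0..7].
drv_out = Net("U1_DATA0")   # driver pin
data0   = Net("DATA0")      # the bus trace after the series resistor

r_series  = R("R10", "33R")   # series termination, at the source/driver
r_thev_hi = R("R11", "220R")  # Thevenin termination, far end, to VCC
r_thev_lo = R("R12", "330R")  # Thevenin termination, far end, to GND
r_strap   = R("R13", "10k")   # pull-down strap for the bus's default state

connect(drv_out, (r_series, 1))
connect(data0, (r_series, 2), (r_thev_hi, 2), (r_thev_lo, 1), (r_strap, 1))
connect(VCC, (r_thev_hi, 1))
connect(GND, (r_thev_lo, 2), (r_strap, 2))

# Note: nothing in the connectivity above says the Thevenin pair sits at the
# far end or the 33R at the driver -- that placement intent lives only in
# comments, which is exactly the "no mental model" worry.
```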
Another simple example might be a single-supply op-amp, which usually has a few resistors to bias the signal correctly. It will easily end up being MANY lines of text, and it won't be at all clear which lines of text associated with the op-amp correspond to which intent.
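As a back-of-the-envelope illustration of how many parts that is, here's a hedged sketch assuming a single-supply, AC-coupled, non-inverting stage (the values and topology are my assumptions, chosen only to count the components and show which ones exist purely for biasing):

```python
# Assumed topology: single-supply, AC-coupled, non-inverting op-amp stage.
VCC = 5.0  # volts

# Bias divider holding the + input at mid-rail.
R_B1, R_B2 = 100e3, 100e3            # ohms
v_bias = VCC * R_B2 / (R_B1 + R_B2)  # 2.5 V "virtual mid-rail"

# Gain network on the - input.
R_F, R_G = 47e3, 4.7e3               # ohms
gain = 1 + R_F / R_G                 # ~11 V/V mid-band

# Plus the passives that carry no equation at all:
#   C_IN  - input AC-coupling cap
#   C_G   - cap in series with R_G so DC gain stays at 1 (keeps the bias centred)
#   C_OUT - output coupling cap, if the next stage is also AC-coupled
print(f"bias point: {v_bias:.2f} V, mid-band gain: {gain:.1f} V/V")

# Roughly eight parts around one op-amp, and a flat netlist won't say
# which of them exist only to hold the + input at VCC/2.
```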
I have about 20 years of doing electronics, FPGAs, boards, C/C++, SoC architecture and DevOps, with a formal education in both HW+SW. I could be an example of "the old guard", but in my career I've usually been the one to champion newer/smarter/better tools and applicable domain-specific languages, so much so that I pivoted my career into this very area for a number of years.
*Most schematics and reference designs are a jumbled mess of wires and symbols. While highly subjective, a good schematic breaks the design into symbols by logical function, groups related components intelligently, and employs a good number of notes on the schematic.
We're largely aligned on the problems we're trying to tackle, and I most certainly understand why you're making these criticisms of these trivial examples and our current implementation. You're right to - these simple designs aren't where designing PCBAs with code will shine (we expect!).
Currently, when you discuss configuring a regulator, it's incumbent on the designer to understand enough of the internals to configure it because, in our opinion, schematics aren't well suited to designing things that are configurable and plastic - either in topological terms or in their parameters. Our hope isn't to abstract this complexity away merely by using code, but rather, because code allows the workflow itself to change, to let a well-tested and trusted configurable block completely abstract the internals such that a designer can forget about them (like a tested piece of code). We want to bring configurability on this "trusted" scale in from the physical world of modules to the world of highly descriptive code. It's a tall order, I'll admit!
It's also well worth noting that while at the moment we're running lean on the visualisations, we do agree they're an extremely potent tool to convey the topology of a design at a glance. We expect we'll be adding a visualiser which should be used to gain familiarity with a circuit, diff it for review, and understand it from a system level (by viewing topology by interface type, etc.) - unlike current schematics, which implicitly hold so much content via the positioning of components.
I don't know if your regulator response was directed at me or someone else, but using that as an example: if you take something like an LM7805/LM317, I think the 'parameterization' you're talking about (i.e. "input_voltage=12V", "output_voltage=3.3V", "accuracy=5%" --> let the tool solve for the resistors and populate the schematic/BOM) sounds cool and would work.
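For the adjustable-regulator case, that kind of parameterization really is just arithmetic. A minimal sketch of what "solve for the resistors" could mean for an LM317 (the helper name and the fixed R1 = 240R choice are my assumptions; the formula is the standard LM317 output equation, ignoring the small I_ADJ term):

```python
def lm317_r2(v_out: float, r1: float = 240.0, v_ref: float = 1.25) -> float:
    """LM317: V_OUT = V_REF * (1 + R2/R1); solve for R2 given a target V_OUT."""
    return r1 * (v_out / v_ref - 1)

r1 = 240.0                    # conventional datasheet value for R1
r2_ideal = lm317_r2(3.3, r1)  # ~393.6 ohms for a 3.3 V output
print(f"R1 = {r1:.0f} R, ideal R2 = {r2_ideal:.1f} R")
```

Snapping that ideal R2 to a real, purchasable value and checking the result against the 5% budget is the part where the tool's choices start to matter.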
But what if it's something like this (a multi-phase step-down converter):
While it's "annoying" that a designer might have to go study a datasheet to know which resistors to tweak or change what's worse is having a tool or 'abstraction' do it for you, with the potential that it changes significantly underneath you and you don't even know what it did or why.
This is the bane of a lot of EDA tools, and more specifically why people rightfully now loathe a lot of FPGA vendors' tools that have things like "abstract IP blocks" configured from the GUI, but where in each new release of the tool the underlying "generated" RTL (which comes from some byzantine invocation of scripts and a soup of options) might be different and completely breaks people's designs.
What happens if it's not re-programmable like an FPGA but something physical with resistors, capacitors and traces on a circuit board?
For sure! You've made two super important points: how can I trust it shuffling things under me, and how can I justify trusting this thing if it's going into long-lead or production hardware (stuff that's hard to fix)?
The first one is super interesting technically because there are a few routes by which engineers can gain trust in something, and in my experience code generation often isn't the strongest of them.
It's often the case that checking an answer is vastly easier than solving the problem in the first place. Our mid-term roadmap includes scripted tests (~pytest + SPICE in CI), and for a complex chip like that, I'd first expect those params to tweak the tests, the tests to fail when out of spec, and the engineer to reconfigure it for that application while understanding the datasheet.
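As a rough sketch of what one of those scripted checks might look like (only pytest is real here; the get_dc_output helper standing in for a SPICE run, the node name, and the input-voltage sweep are all assumptions):

```python
import pytest

# Hypothetical stand-in for running a SPICE operating-point analysis on the
# generated netlist -- in CI this would shell out to a real simulator.
def get_dc_output(netlist: str, node: str, v_in: float) -> float:
    raise NotImplementedError("wire this up to your simulator of choice")

V_TARGET = 3.3
TOLERANCE = 0.05  # the "accuracy=5%" requirement from the example above

@pytest.mark.parametrize("v_in", [5.0, 12.0, 24.0])  # line-regulation sweep
def test_output_voltage_within_spec(v_in):
    v_out = get_dc_output("regulator.cir", node="VOUT", v_in=v_in)
    assert v_out == pytest.approx(V_TARGET, rel=TOLERANCE)
```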
The next level is extremely similar, except the solver selects the component params based on cost functions and rules, and the same test suite validates the solution, providing the confidence needed to manufacture a prototype.
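A toy version of that solver-plus-cost-function step, continuing the LM317 numbers from earlier (the E24 table is the standard preferred-value series; the cost weighting is made up to stand in for real rules like "prefer parts already on the BOM"):

```python
# Standard E24 preferred values, expanded across three decades (10R..9.1k).
E24 = [1.0, 1.1, 1.2, 1.3, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.7, 3.0,
       3.3, 3.6, 3.9, 4.3, 4.7, 5.1, 5.6, 6.2, 6.8, 7.5, 8.2, 9.1]
CANDIDATES = [round(m * 10 ** k, 1) for k in range(1, 4) for m in E24]

V_REF, R1, V_TARGET = 1.25, 240.0, 3.3

def cost(r2: float) -> float:
    v_out = V_REF * (1 + r2 / R1)
    error = abs(v_out - V_TARGET) / V_TARGET
    # Made-up secondary preference (mildly favour smaller values), standing in
    # for real rules like "already on the BOM" or "in stock at the assembler".
    return error + 1e-6 * r2

best = min(CANDIDATES, key=cost)
print(f"chosen R2 = {best} R -> Vout = {V_REF * (1 + best / R1):.3f} V")  # 390 R -> 3.281 V
```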
The final and important mechanism, though, is that you have a lockfile that locks down the configuration of these discretes unless an engineer explicitly opens it up - yielding review as we'd have today.
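A minimal sketch of that lockfile mechanism, assuming a plain JSON file committed to the repo (the filename and structure are invented; the point is only that solved values stay pinned until someone deliberately regenerates them and puts the diff up for review):

```python
import json
from pathlib import Path

LOCKFILE = Path("passives.lock.json")  # hypothetical name, committed to the repo

def check_or_lock(solved: dict, update: bool = False) -> None:
    """Fail if the solver's output drifts from what's committed, unless the
    engineer explicitly asked to regenerate the lock."""
    locked = json.loads(LOCKFILE.read_text()) if LOCKFILE.exists() else {}
    if update or not locked:
        LOCKFILE.write_text(json.dumps(solved, indent=2, sort_keys=True))
        return
    drift = {k: (locked.get(k), v) for k, v in solved.items() if locked.get(k) != v}
    if drift:
        raise SystemExit(f"solver output changed vs {LOCKFILE}: {drift}\n"
                         "re-run with update=True (and get it reviewed) to accept")

# e.g. pinning the values chosen by the toy solver above:
check_or_lock({"R1": "240R", "R2": "390R"})
```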
The other half of this, though - how can we confidently deploy this to production? - is eased by the rigorous workflows that software version control tools (GitHub, GitLab) can enforce. If an engineer tweaks these params, you can use the tooling to rigorously enforce review from the domain-specific experts on the team.
Currently, it's vastly too easy to slip something through (mostly unwittingly) in design review meetings, and these version control tools go a long way towards fixing that.