https://twitter.com/iopeak/status/1283223085894656000?s=20

I've often stood against Siri/Alexa and other NLP engines because of the following situation: you request something, the voice assistant fails to recognize it or misinterprets it, and you lose trust in the system. For this reason, I've always been against no-code movements too, since they often prefer ambiguous natural-speech statements over code-like statements. Of course, a perfect system that mirrors the way people think would be ideal, but NLP algorithms are not there yet.

Steve from Storyscript gets around this problem by having the UI provide subtle feedback/suggestions to the user as they type. The suggestions (obviously) are the commands the system understands. Voice assistants provide no such feedback (vocal feedback is much clunkier), but visual feedback is easily understood by people. This all goes back to how user interfaces and users form a two-way information transfer.

This system is pretty unique...in a way it mostly is code, except there are multiple ways to do the same thing. In code, if you want a for loop, you use a for loop. In the Storyscript system, you can express the same for loop by saying "For items in this" or "Loop over these" or "As this happens", etc. The benefit seems to be that a user who may not know what a for loop is can ask for it in terms familiar to them.
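A minimal sketch of that idea: several natural phrasings all resolve to one canonical loop construct, so the editor can accept familiar wording and still produce one well-defined program. The phrase table and the `foreach` name here are my own illustrative assumptions, not Storyscript's actual grammar.

```python
# Hypothetical synonym table: many surface phrasings, one construct.
CANONICAL_LOOP = "foreach"

LOOP_PHRASES = {
    "for items in": CANONICAL_LOOP,
    "loop over": CANONICAL_LOOP,
    "as this happens": CANONICAL_LOOP,
    "repeat for each": CANONICAL_LOOP,
}

def resolve(phrase: str):
    """Map a user's phrasing to the canonical construct, if recognized."""
    key = phrase.lower().strip()
    for prefix, construct in LOOP_PHRASES.items():
        if key.startswith(prefix):
            return construct
    # Unknown phrasing: a real UI would surface suggestions here
    # instead of silently failing, per the feedback idea above.
    return None

print(resolve("Loop over these"))   # foreach
print(resolve("While the moon"))    # None
```

The key point is that ambiguity is resolved at edit time, with visible feedback, rather than at run time.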


As easy to use as the interface can be, on the other end of the spectrum is power. Since the Storyscript model of writing algorithms matches the ambiguous thought of a person, it is also limited by that ambiguity. With rigid code, one person can build a very complex system by effectively isolating and working on separate tasks. If Storyscript could provide the same, it should allow writing "functions" and then assigning them natural descriptions. For example, an isolated task to notify your team about something would be:

send mark an email about this
send emily an email about this
send a slack message in the team chat about this

and then you could assign that task the description "message the team about this" (where "this" is an expected input).
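The abstraction suggested above can be sketched as a registry that binds a natural-language description to a parameterized sequence of steps. Everything here (`send_email`, `send_slack`, `define`, `run`) is a hypothetical illustration of the idea, not a real Storyscript API.

```python
# Stand-in actions; assumed for illustration only.
def send_email(to: str, about: str) -> str:
    return f"email to {to}: {about}"

def send_slack(channel: str, about: str) -> str:
    return f"slack #{channel}: {about}"

registry = {}

def define(description: str, steps) -> None:
    """Bind a natural description to a list of single-input steps."""
    registry[description] = steps

def run(description: str, this: str) -> list:
    """Invoke a named task; 'this' is the expected input."""
    return [step(this) for step in registry[description]]

# The "function" from the example, named by its natural description.
define("message the team about this", [
    lambda this: send_email("mark", this),
    lambda this: send_email("emily", this),
    lambda this: send_slack("team chat", this),
])

for line in run("message the team about this", "the launch"):
    print(line)
```

This is the isolation the text asks for: once defined, the composite task can be invoked by its description without the caller knowing its internals.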



@Steve Peak comments and additional thoughts.

Context-aware interoperability is key when thinking about data and workflows. MercuryOS provided a glimpse into a different paradigm that is contextually aware of the "flow" (as MercuryOS put it). Context is also what resolves ambiguity: the more context the computer has, the more reliable the suggestions become.

“Automatically finding a program …that satisfies the user intent expressed in the form of some specification. Since the inception of AI in the 1950s, this problem has been considered the holy grail of Computer Science.” — Program Synthesis, Microsoft Research.

Dialog-driven development is a new term we use to describe the relationship humans have with the computer. Today, most if not all of our interaction is mono-directional: the user is always driving the task while the computer is along for the ride. In a bi-directional model it's a dialog, a two-way conversation, where the computer provides data, suggestions, and ambiguity resolution while the user provides more ambiguity and ideas. This delicate dance between human and computer has not yet been seen in the wild; at least I have not seen it outside the demos we have created. More on this topic has been explored and will emerge in demonstrations in the near future.

It's not code. Yes, Storyscript is not code. It's not compiled, interpreted, translated, or transformed, and it's not plain text. So what is it? Well, under the hood is a program (as one would expect), and our product represents that program in multiple dimensions: text, visuals, WYSIWYGs, charts, etc. Mostly text for logic and mostly visuals for, well, visual stuff. The resulting text is only a representation of the program, not the source of truth. This is the polar opposite of traditional programming languages, where the plain text, while fully representing the truth, is tokenized and compiled into lower-level systems (generally speaking). This orthodox strategy is so ubiquitous in the programming industry that confusion results when people look at Storyscript and think it's "code"... it's not.

Here is one trick our tool can do that traditional programming languages cannot: collaborating in multiple human languages. Because we represent the program, we can choose many ways to represent it; for example, in the native spoken language of the user. Most languages have sentence structures different from English, so when Storyscript is presented to a user, it may be read in a structure more familiar to them; the grammar can change visual positions while maintaining a consistent state of truth. We also blend visuals and interactive tools into the middle of text: instead of email to:eric from:john subject:hello body:... you would get a UX that looks more "email"-like in the middle of your textual logic.
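The "representation, not source of truth" idea can be sketched as a small structure rendered into different human-language surfaces. The node shape and the per-language templates below are illustrative assumptions; they only show how one underlying program can be displayed with different word order while the state of truth stays fixed.

```python
# The source of truth: a structured program node, not text.
program = {"op": "send_email", "to": "eric", "subject": "hello"}

# Per-language surface templates (assumed, for illustration).
# German word order differs from English, but both render the
# same underlying node.
TEMPLATES = {
    "en": "send an email to {to} with subject {subject}",
    "de": "sende eine E-Mail an {to} mit Betreff {subject}",
}

def render(node: dict, lang: str) -> str:
    """Produce one surface representation of the program node."""
    return TEMPLATES[lang].format(**node)

print(render(program, "en"))  # send an email to eric with subject hello
print(render(program, "de"))
```

Editing either rendering would, in this model, update the structured node, and every other representation (text in another language, a visual "email"-like widget) would re-render from it.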