{"id":465,"date":"2021-03-19T21:37:11","date_gmt":"2021-03-19T20:37:11","guid":{"rendered":"https:\/\/www.nikostotz.de\/blog\/?p=465"},"modified":"2021-03-21T13:10:09","modified_gmt":"2021-03-21T12:10:09","slug":"xtext-vs-mps-decision-criteria","status":"publish","type":"post","link":"https:\/\/www.nikostotz.de\/blog\/xtext-vs-mps-decision-criteria\/","title":{"rendered":"Xtext vs. MPS: Decision Criteria"},"content":{"rendered":"<p><strong>tl;dr<\/strong> If we started a new domain-specific language tomorrow, we could choose between different language workbenches or, more general, textual vs. structural \/ projectional systems. We should decide case-by-case, guided by the criteria targeted user group, tool environment, language properties, input type, environment, model-to-model and model-to-text transformations, extensibility, theory, model evolution, language test support, and longevity.<br \/>\n<!--more--><br \/>\nThis post is based on a presentation and discussion we had at the <a href=\"https:\/\/strumenta.community\/\">Strumenta Community<\/a>. You can <a href=\"https:\/\/nikostotz.de\/xtext-vs-mps.pdf\">download the slides<\/a>, although reading on might be a bit more clear on the details. Special thanks to <em>Eelco Visser<\/em> for his contributions regarding language workbenches besides Xtext and MPS.<\/p>\n<h2 id=\"_introduction\">Introduction<\/h2>\n<p>This whole post wants to answer the question:<\/p>\n<blockquote><p>Tomorrow I want to start a new domain-specific language.<br \/>\nWhich criteria shall I think about to decide on a language workbench?<\/p><\/blockquote>\n<p>The most important, and most useless answer to this question is: &#8220;It depends.&#8221; Every language workbench has its own strengths and weaknesses, and we should assess them anew for each language or project. 
All criteria mentioned below are worth consideration, and should be balanced against the needs of the language or project at hand.<\/p>\n<p>Almost every aspect described below <em>can<\/em> be realized in any language workbench\u2009\u2014\u2009if we really wanted to torture ourselves, we could write an ASCII-art text DSL to &#8220;draw&#8221; diagrams, or force a really complex piece of procedural logic into lines and boxes. On the other hand, an existing text-based processing chain integrates rather well with a textual DSL, and tables work nicely in a structured environment.<\/p>\n<p>I personally know only Xtext and MPS well enough to offer an educated opinion; thankfully, during the presentation, several others chimed in to offer additional insights. Thus, we can extend this post\u2019s content (to some degree) to &#8220;Textual vs. Structural: Decision Criteria&#8221;.<\/p>\n<div style=\"background-color: lightgray; padding: 1em;\">\n<h2>What do we mean by <em>textual<\/em> and <em>structural<\/em> language workbenches?<\/h2>\n<p>As a loose distinction, we\u2019re using the rule of thumb &#8220;If you directly edit what\u2019s written on disk, it\u2019s textual.&#8221;<\/p>\n<p><em>Structural<\/em> describes both <em>projectional<\/em> and <em>graphical<\/em> systems. 
In <em>projectional<\/em> systems, the user has no influence on how things are shown; with <em>structural<\/em> systems, the user may have some influence\u2009\u2014\u2009think of manually laying out a diagram (thanks to <em>Jos Warmer<\/em> for this clarification).<\/p>\n<p>Examples of <em>textual<\/em> systems include<\/p>\n<ul>\n<li>ANTLR<\/li>\n<li>MontiCore<\/li>\n<li>Racket<\/li>\n<li>Rascal<\/li>\n<li>Spoofax<\/li>\n<li>Xtext<\/li>\n<\/ul>\n<p>Examples of <em>structural<\/em> systems are<\/p>\n<ul>\n<li>MetaEdit+<\/li>\n<li>MPS<\/li>\n<li>Sirius<\/li>\n<\/ul>\n<\/div>\n<h2 id=\"_targeted_user_group\">Targeted User Group<\/h2>\n<p>If our DSL targeted developers, we might go for a textual system. Developers are used to the powerful tools provided by a good editor or an IDE, and expect this kind of support for handling their &#8220;source code&#8221;\u2009\u2014\u2009or, in this case, model. Textual systems might integrate better with their other tools.<\/p>\n<p>If we targeted business users, they might prefer a structural system. The main competitor in this field is Excel with hand-crafted validation rules and obscure VBA scripts attached. Typically, business users benefit more from projectional features like mixing text, tables, and diagrams.<\/p>\n<h2 id=\"_tool_environment\">Tool Environment<\/h2>\n<p>If our client had an existing infrastructure to deploy Eclipse-based tooling, we would probably want to leverage it. This implies using an Eclipse-based language workbench like Rascal, Sirius, or Xtext. If we wanted model integration with existing tools, EMF would be our best bet, pointing towards Eclipse.<\/p>\n<p>If our client already leaned towards IntelliJ or similar systems, MPS would be more familiar to them. 
Spoofax supports both Eclipse and IntelliJ.<\/p>\n<h2 id=\"_language_properties\">Language Properties<\/h2>\n<p>If (parts of) our DSL had an established text-based language, we would want to reuse this existing knowledge of our users and provide a similar textual language. Textual syntax often provides aids to parsers that are difficult to reproduce fluently in structural systems.<\/p>\n<p>As an example, think of a C-style if-statement. In text, the user types <tt>i<\/tt>, <tt>f<\/tt>, maybe a space, and <tt>(<\/tt> without even thinking about it. In a projectional editor, she still types <tt>i<\/tt> and <tt>f<\/tt>, but the parenthesis is probably added automatically by the projection.<\/p>\n<pre>\/\/ | denotes cursor position\nif (|\u00abcondition\u00bb) {\n  \u00abstatements\u00bb\n}<\/pre>\n<p>If she typed <tt>(<\/tt>, we would have two bad choices: either we add the parenthesis inside the condition, which is probably not what the user wanted in 95 % of the cases; or we ignore the parenthesis, making the other 5 % really hard to enter.<\/p>\n<p>One important language property is whether we can parse it with reasonable effort and accuracy. For more traditional systems like ANTLR and Xtext, we reach the threshold of unparsable input rather quickly. More advanced systems like Spoofax and Rascal can handle ambiguities well. However, as an extreme example, I doubt we could ever have a parser that reconstructs the semantics of an ASCII-art UML diagram. 
More realistically, it might be pretty hard for a parser to distinguish mixed free text with unmarked references\u2009\u2014\u2009think of a free text with some syntactically unmarked references to a user-defined ontology sprinkled in: <em>This is free text, with <strong>Ornithopter<\/strong>s or other <strong>Dune<\/strong> references<\/em>.<\/p>\n<p>Other structures might be parsable, but are very cumbersome to enter\u2009\u2014\u2009I have yet to see a textual language where writing tables is less than annoying.<\/p>\n<p>Related to parseability is language integration. Almost all technical languages use traditional parser systems, leading to the joy of escaping: <tt>&lt;span onclick=\"if(myVar.substr(\\\"\\\\'\\\") &amp;lt; 5) myTag.style = \\'.header &amp;gt; ul { font-weight: bold; } \\'\"&gt;<\/tt>. More modern languages aren\u2019t that pedantic, but try to write the previous sentence in markdown \u2026\u200b<\/p>\n<p>If we wanted to integrate non-textual content or languages in a textual system, it would get tricky pretty quickly. In fact, we would have to solve many of the problems projectional editors face. As an example, think of the parameter info many IDEs can project into the source code: The Java file contains <tt>myObj.myFunc(\"Niko\", false)<\/tt>, but the IDE displays <tt>myObj.myFunc(<em>name:<\/em> \"Niko\", <em>authorized:<\/em> false)<\/tt>. If the cursor were just to the right of the opening parenthesis and we pressed the right arrow key, would we move to the left or the right of the double quotes? What if the user could interact with the projected part, e.g. a color selector? These examples are <em>projected<\/em> mix-ins, but it doesn\u2019t get any better if we imagined the file contents <tt>&lt;img src=\"data:image\/png;base64,iVBORw \u2026\u200b\"\/&gt;<\/tt>, and wanted to display an inline pixel editor. 
The aforementioned table embedded into some text is another example.<\/p>\n<p>Structural systems really shine if we wanted to have different editors for the same content, or different viewpoints on the content. To illustrate different editors for the same content, think of a state machine. If we wanted to discuss it with our colleagues, it should be presented in the well-known lines-and-boxes form. We would still want to retarget a transition or add a state graphically. However, if we had to write it from scratch and had a good structure in mind, or just wanted to refactor an existing one, a text-like representation would be much more efficient.<\/p>\n<p>Different viewpoints can be as simple as &#8220;more or less detail&#8221;: in a component model, we might want to see only the connections between components, or also their internal wiring. Textual editors can also hide parts of the content\u2009\u2014\u2009most IDEs, by default, fold the legal header comment in a source code file.<br \/>\nAs an example of <em>different<\/em> viewpoints, imagine a complex model of a machine that integrates mechanical, electrical, and cost aspects. All of these are interconnected, so the integrated model is very valuable. Hardly anybody would like to see all the details. However, different users would be interested in different combinations: the safety engineer needs to know about currents and moving parts, and the production planner wants to look at costs and parts that are hard to get. In a textual system, we could create reports with such contents, but would have to accept serious limitations if we wanted all the viewpoints to be editable (e.g. a complex distribution to different files + projection into a different file).<\/p>\n<h2 id=\"_input_type\">Input Type<\/h2>\n<p>A blank slate can be unsuitable for some types of users and input. If we wanted the user to provide very specific data, we would offer them a form or a wizard. These are very simple <em>structured<\/em> systems. 
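<\/p>\n<p>Even in a general-purpose language, we can express this kind of enforced structure with types. Here is a minimal sketch (plain Java; all names are invented for illustration) in the spirit of a state machine model: the API only accepts a state as a transition target, so an invalid model cannot even be constructed.<\/p>\n<pre>\/\/ Hypothetical structural model API: a Transition can only ever\n\/\/ point to a State, so an ill-formed model is unrepresentable.\nclass State {\n    final String name;\n    State(String name) { this.name = name; }\n}\n\nclass Transition {\n    final State source;\n    final State target; \/\/ the type system enforces the structural rule\n    Transition(State source, State target) {\n        this.source = source;\n        this.target = target;\n    }\n}\n\nclass Demo {\n    static String describe(Transition t) {\n        return t.source.name + \" -> \" + t.target.name;\n    }\n}<\/pre>\n<p>A textual implementation, by contrast, would accept the ill-formed input and could only flag it afterwards.<\/p>\n<p>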
A state machine DSL provides the user with much more flexibility, but enforces some structure\u2009\u2014\u2009we can\u2019t point a transition to another transition, only to a state. In a structured implementation of this DSL, the user would simply not be able to create such an invalid transition; a textual DSL would allow writing it, but mark it as erroneous. If our users were developers, they would be used to starting with an empty window, entering the right syntax, and handling error messages. If we targeted people mostly dealing with forms, they might be scared by the empty window, or would not know how to fix the error reported by the system. (&#8220;Scared&#8221; might sound funny, but there\u2019s quite some anecdotal evidence.) In a structural system, developers might be really annoyed that they have 15 very similar states with only one transition each, but still have to write them as separate multi-line blocks; they would feel limited by the rigid structure. For the other group, we could project explanatory texts, and visually separate scaffolding from places where they should enter something; they would feel guided by the pre-existing structure.<\/p>\n<p>To some degree, we can adjust our language design to the appropriate level of flexibility. If we implemented an OO-class-like system, we could either allow class content in arbitrary order, or (by grammar \/ language definition) enforce writing constructors first, then attributes, then public methods, and private methods only at the end.<\/p>\n<h2 id=\"_environment\">Environment<\/h2>\n<p>Textual systems have been around for a long time, so we know how to integrate them with other systems. Any workflow system can move text files around, and every versioning system can store, merge, and diff such files. We understand perfectly how to handle them as build artifacts, and can inspect them on any system with a simple text editor. 
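<\/p>\n<p>Because a textual model is just lines of text, even generic, model-agnostic tooling applies to it. A toy sketch (plain Java, purely illustrative) of a line-based comparison between two versions of a model:<\/p>\n<pre>import java.util.Arrays;\n\nclass TextModelDiff {\n    \/\/ Lines present in the new model version but not in the old one.\n    static String[] addedLines(String[] oldModel, String[] newModel) {\n        return Arrays.stream(newModel)\n                     .filter(line -> !Arrays.asList(oldModel).contains(line))\n                     .toArray(String[]::new);\n    }\n}<\/pre>\n<p>Any real diff or merge tool does this far better; the point is that nothing model-specific is required.<\/p>\n<p>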
The Language Server Protocol provides an established technology to use textual languages in a web context.<\/p>\n<p>Any such integration is more complicated with structural systems. Such a system might store its contents in XML or a binary format, so we require specific support for version control. As of now (March 2021), I\u2019m not aware of a production-quality structural language workbench based on web technology. I hope this will change within the next year.<\/p>\n<p>On the other hand, if our project does not require tight external integration and targets a desktop environment, a system like MPS provides lots of tooling out of the box that\u2019s well integrated with each other.<\/p>\n<h2 id=\"_transformations_model_to_model\">Transformations: Model-to-Model<\/h2>\n<p>The main distinction for this criterion is between EMF-enabled systems and others. Our chances to leverage existing transformation technologies, or to re-use existing transformations, would be pretty good in an EMF ecosystem. EMF provides a very powerful common platform, and a plethora of tooling (both industrial and academic) is available.<\/p>\n<p>Two very strong suits of MPS are intermediate languages and extensible transformations. EMF provides frameworks to link several model-to-model transformations into a chain, but it still requires quite some manual work and plumbing. In MPS, this approach is used extensively both by MPS itself and by most of the more complex custom languages I know of. The tool support is excellent; for example, it takes literally one click to inspect all intermediate models of a transformation chain.<\/p>\n<p>Every model-to-model transformation in MPS can be extended by other transformations. 
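<\/p>\n<p>Both ideas, intermediate models and extensible transformations, can be sketched in a few lines of plain Java (all names are invented for illustration): models are plain strings here, stages are ordinary functions, and every intermediate model is kept around for inspection.<\/p>\n<pre>\/\/ Sketch of an extensible transformation chain; each stage produces\n\/\/ the next (intermediate) model, and all intermediates stay inspectable.\ninterface Stage {\n    String apply(String model);\n}\n\nclass Chain {\n    static String[] run(String input, Stage[] stages) {\n        String[] intermediates = new String[stages.length];\n        String current = input;\n        int i = 0;\n        for (Stage stage : stages) {\n            current = stage.apply(current);\n            intermediates[i++] = current; \/\/ inspectable intermediate model\n        }\n        return intermediates;\n    }\n}<\/pre>\n<p>An extension would simply contribute additional stages to such a chain.<\/p>\n<p>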
How feasible a specific extension is in practice depends on the language and transformation design, but this mechanism is used a lot in real-world systems.<\/p>\n<h2 id=\"_transformations_model_to_text\">Transformations: Model-to-Text<\/h2>\n<p>Tightly controlling the output of a model-to-text transformation tends to be easier in textual systems. On the one hand, it\u2019s doable to maintain the formatting (i.e. white space, indentation, newlines) of some part of the input. On the other hand, the system is usually designed to output arbitrary text, so we can tweak it as required. Xtend integrates very nicely with Xtext (or any other EMF-based system), and provides superior support for model-to-text transformation: It natively supports polymorphic dispatch, and allows indenting generation templates by both the template <em>and<\/em> the output structure, with a clear way to tell them apart.<\/p>\n<p>If we didn\u2019t need, or even wanted to prevent, customization of the output, structural systems could be helpful. The final text is structured by the transformation, or post-processed by a pretty printer.<\/p>\n<p>For MPS, we need to consider whether the output format is available as a language. In this case, we use a chain of model-to-model transformations and have the final model take care of the text output, which usually is very close to the model. Java and XML languages ship with MPS; C, JSON, partial C++, partial C#, and others are available from the community.<\/p>\n<h2 id=\"_extensibility\">Extensibility<\/h2>\n<p>Xtext assumes a closed world, whereas MPS assumes an open world. Thus, if we wanted to tightly control our DSL environment, Xtext requires very little effort from us. Using MPS in a controlled environment requires a lot of work.<\/p>\n<p>On the other hand, if our DSL served as an open platform, MPS inherently offers any kind of extensibility we could wish for. 
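<\/p>\n<p>In a closed-world system, an extension point is something the language developer provides deliberately. A minimal sketch (plain Java; all names are invented for illustration) of one such designed-in hook:<\/p>\n<pre>\/\/ Hypothetical designed-in extension point: the language infrastructure\n\/\/ defines the interface, and extensions supply implementations.\ninterface ValidationRule {\n    \/\/ returns an error message, or null if the element is fine\n    String check(String modelElement);\n}\n\nclass Validator {\n    static int issueCount(String element, ValidationRule[] rules) {\n        int issues = 0;\n        for (ValidationRule rule : rules) {\n            if (rule.check(element) != null) {\n                issues++;\n            }\n        }\n        return issues;\n    }\n}<\/pre>\n<p>Anything without such a hook stays closed to extensions.<\/p>\n<p>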
In Xtext, we would have to design each required extension point explicitly.<\/p>\n<h2 id=\"_conceptual_framework_theory\">Conceptual Framework \/ Theory<\/h2>\n<p>Parsers and related text-processing tools have been researched thoroughly since the 1970s, and the field continues to move forward. Computer science has built up a solid theoretical understanding of the problem and the available solutions. We can find several comparable, stable, and usable implementations for any major approach.<\/p>\n<p>Structural systems are a niche topic in computer science; Eelco <a href=\"https:\/\/d.strumenta.community\/t\/academic-research-on-structured-editors\/1148\">provided some pointers<\/a>. We don\u2019t understand structural editors well enough to come up with sensible, objective ways to compare them. All usable implementations I know of are proprietary (although often Open Source).<\/p>\n<h2 id=\"_scalability\">Scalability<\/h2>\n<p>As parsers have been around for a long time, we understand pretty well how they can be tuned. They are widely used, so there\u2019s a lot of experience available on how to design a language to be efficiently parsable. Xtext has been used in production with gigabyte-sized models. The same experience provides us with very performant editors. I\u2019d expect a textual system to fail more gracefully as we closed in on its limits: loading, purely displaying the content, syntax highlighting, folding, navigation, validation, and generation should scale differently, and the system should remain partially useful\/usable with the subset of still-working operational aspects. If a model became too big for our tooling, we could always fall back to plain text editors; they can edit files of any size. We also know how to generate from very big models: C++ compilers build up completely inlined files of several hundreds of megabytes; the aforementioned gigabyte-sized Xtext models are processed by generators.<\/p>\n<p>Practical experience with MPS shows scalability issues in several aspects. 
The default serialization format stores one model, with all its root nodes, in one XML file. Performance degrades seriously for larger models. Using any of the other default serialization formats (one XML file per root node; binary) helps a lot. The editor is always rendered completely. Depending on the editor implementation, it might be re-rendered on every model change, or even on every cursor movement. I\u2019m not aware of any comprehensive guide on how to tackle editor performance issues (in my experience, we should try to avoid the flow layout for bigger parts of the editor). The biggest performance issue with possibly any structural system is the missing fallback: Once we have a model too big for the system (e.g. by import), it\u2019s very hard to do something about the model\u2019s size, as we would need the system itself to edit the model. Thankfully, we can still edit the model programmatically in most cases. Both validation and generation performance in MPS depend highly on the language implementation. The model-to-model transformation approach tends to use quite a lot of memory; I\u2019d assume model-to-model transformations (with free model navigation) to be harder to optimize for memory usage than model-to-text transformations.<\/p>\n<h2 id=\"_model_evolution\">Model Evolution<\/h2>\n<p>Xtext does not provide any specific support for model evolution. As a conceptual advantage of textual systems, we can migrate models with text processing tools. Search \/ replace or <em>sed<\/em> can be sufficient for smaller changes to model instances. As a drawback, we cannot store meta-information in the model while keeping it out of sight (and out of reach) of the user. Thus, we have to encode version information in some way directly in our language content.<\/p>\n<p>MPS stores the used language version with every model instance. 
It detects if a newer version is available, and can run migration scripts on the instance.<\/p>\n<h2 id=\"_language_test\">Language Test<\/h2>\n<p>Most aspects of Xtext-based languages are implemented in Java (or another JVM language), enabling regular JUnit-based tests. Xtext ships with some utilities to simplify such tests, and to ease tests for parsing errors. Xpect, an auxiliary language to Xtext, allows embedding language-specific tests like validation, auto-complete, and scoping in comments of example model instances. In practice, most transformation tests compare the generated output to some reference by text comparison.<\/p>\n<p>Naturally, MPS does not support (or need) parsing tests. It provides specific tests for editors, generators, and other language aspects. The editor tests support checking interaction schemes like cursor movement, intentions, or auto-complete. Generator tests are hardly usable in practice, as they require the generated output model to be identical to a reference model, and don\u2019t allow checking intermediate models. The tests for other language aspects use language extensibility to annotate regular models with checks for validation, scoping, type calculation, etc. MPS provides technically separated language aspects, and specific DSLs, for e.g. scoping or validation. They are efficient, but make it hard to test the contained logic with regular JUnit tests.<\/p>\n<h2 id=\"_longevity\">Longevity<\/h2>\n<p>We can safely assume we will always be able to open text files as long as we can read the storage media. Text could even be printed. It\u2019s a bit less clear whether parsing technology in 50 years\u2019 time will easily cope with the structures of today\u2019s languages. 
Today\u2019s (traditional, as described above) parsers would have a hard time parsing something like PL\/1, where any keyword can be used as an identifier in an unambiguous context.<\/p>\n<p>If we stored structured models in a binary format, it might be very hard to retrieve the contents if the system itself was lost. If we used an XML dialect, we could probably recover the basic structures (containment + type, reference + type, metatype, property) of the model.<\/p>\n<p>Let\u2019s assume we lost the DSL system itself, and only know the model instances, or cannot modify the DSL system. (This scenario is not extremely unlikely\u2009\u2014\u2009there are a lot of productive mainframe programs without available source code.) I don\u2019t have a clear opinion on whether it would be easier to filter out all the &#8220;noise&#8221; from a parsed text file to recover the underlying concepts, or to reassemble the basic structures from an XML file.<\/p>\n<p>In the more probable case, our DSL system is outdated, but we can still run and modify it, e.g. in a virtual environment. Then we can write an exporter that uses the original retrieval logic (irrespective of parsing or structured model loading), and exports the model contents to a suitable format.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>tl;dr If we started a new domain-specific language tomorrow, we could choose between different language workbenches or, more generally, textual vs. structural \/ projectional systems. 
We should decide case-by-case, guided by criteria such as targeted user group, tool environment, language properties, input type, environment, model-to-model and model-to-text transformations, extensibility, theory, model evolution, language test support, and [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[29,4],"tags":[],"class_list":["post-465","post","type-post","status-publish","format-standard","hentry","category-mps","category-xtext"],"_links":{"self":[{"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/posts\/465","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/comments?post=465"}],"version-history":[{"count":20,"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/posts\/465\/revisions"}],"predecessor-version":[{"id":485,"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/posts\/465\/revisions\/485"}],"wp:attachment":[{"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/media?parent=465"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/categories?post=465"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.nikostotz.de\/blog\/wp-json\/wp\/v2\/tags?post=465"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}