One of the defining traits of design tools (e.g. architecture, electrical engineering, graphic design) is whether the design paradigm is destructive or non-destructive.
In simple terms, a destructive paradigm stores only the result of an edit or modification, whereas a non-destructive paradigm keeps track of the sequence of edits and dynamically computes the result from that sequence at render/compute/compile time.
To illustrate this, consider adding a pixel blur to a 2D image. A destructive approach would apply the blur directly to the baseline image, meaning the original image is no longer stored anywhere. A non-destructive approach would add a pixel-blur “layer” on top of the baseline image, rendering the blurred image while preserving the original image in its raw state in the base layer.
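The contrast can be sketched in a few lines of code. This is a minimal illustration, not any tool's actual implementation: an "image" is just a flat list of pixel values, and the blur is a simple box blur invented for the example.

```python
def blur(pixels, radius=1):
    """Box blur: each pixel becomes the average of its neighborhood."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

# Destructive: the edit overwrites the data; the original is gone.
image = [0, 0, 255, 0, 0]
image = blur(image)          # the original [0, 0, 255, 0, 0] is no longer stored

# Non-destructive: keep the base plus a stack of edit layers, and
# compute the result on demand.
base = [0, 0, 255, 0, 0]
layers = [lambda px: blur(px, radius=1)]

def render(base, layers):
    result = base
    for layer in layers:
        result = layer(result)
    return result

rendered = render(base, layers)  # same pixels as the destructive result,
                                 # but `base` is untouched and `layers` can
                                 # be edited or reordered later
```

The non-destructive version pays for its flexibility by re-running the layer stack on every render, which previews the tradeoff discussed below.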
You can think about these architectural patterns as essentially a tradeoff between editability, understandability, and composability versus compute efficiency & system complexity.
Destructive design tools have much lower computational requirements because they do not recompute the entire modification graph over and over as you add more edits. However, they are much harder to work with because you cannot go back and remove or modify individual changes.
In the above example — the non-destructive editor would allow you to go back later and change the configuration of the pixel blur or even change the base image, and everything else would still work. These sorts of retroactive modifications would not be possible in the destructive editor.
More broadly, non-destructive editing paradigms allow for much more “composability” in system design via functional editing paradigms — you can save a sequence of mutations/modifiers as a “function”, combine it with other mutations to create higher-order “functions”, and repeat ad nauseam.
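A hedged sketch of that functional idea, with invented modifier names operating on a single brightness value:

```python
def compose(*modifiers):
    """Combine a sequence of modifiers into one reusable 'function'."""
    def combined(value):
        for m in modifiers:
            value = m(value)
        return value
    return combined

# Illustrative modifiers on a single pixel brightness value (0-255).
brighten = lambda v: min(255, v + 40)
invert   = lambda v: 255 - v

# Save a sequence of edits as a named function...
punch_up = compose(brighten, invert)

# ...then combine it with itself (or other edits) into a higher-order one.
double_hit = compose(punch_up, punch_up)

result = punch_up(100)   # brighten: 140, then invert: 115
```

Each saved sequence is itself just another modifier, so the composition can be repeated indefinitely.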
Non-destructive editors are also generally easier to reason about — the user can see what the final result was built up from. But, because they are storing this very complex compute graph which must be re-run every time a change is made or you need to show the result, they are much more computationally intensive to run. Similarly, it is much more complicated to build a non-destructive editing tool.
While virtually all tools in the software engineering and product design world are non-destructive (programming languages and software are a great example of a purely non-destructive paradigm; Figma is built entirely around layers and never does direct pixel manipulation), what is interesting is that much of the design tooling world is still destructive.
Photoshop is a good example of this — most filters & effects in Photoshop are destructive by default. While it is possible to use Photoshop in a way that makes such changes non-destructive via features like Smart Objects, it is not the “default” paradigm and it is something the user must think about and learn when using the tool.
Similarly, the majority of 3D design tools that revolve around meshes/triangles are destructive, or only allow very specific subsets of their functionality to be modeled non-destructively (e.g. Blender). A notable exception is the newer generation of parametric design tools in CAD such as Grasshopper and Fusion 360. In such tools, everything is defined in terms of dimensions and constraints — e.g. this angle is 30 degrees, these lines are parallel — and you can go back at any point to alter those previous dimensions, which then recalculates the entire 3D object.
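The parametric idea can be sketched abstractly: geometry is a function of named dimensions, so editing a dimension recomputes the shape. The example below is purely illustrative (a bent line segment defined by a length and an angle), not how any CAD kernel actually works.

```python
import math

def bracket_points(length, angle_deg):
    """Two joined segments; `length` and `angle_deg` are the parameters."""
    a = math.radians(angle_deg)
    return [(0, 0),
            (length, 0),
            (length + length * math.cos(a), length * math.sin(a))]

v1 = bracket_points(10, 30)   # original design intent: a 30-degree bend
v2 = bracket_points(10, 45)   # edit the dimension; the geometry recomputes
```

Because only the parameters are stored, the "edit" is a change to an input rather than a mutation of the output.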
The Impact of Computational Advances on Destructive Editing
Given that computation is the primary bottleneck for non-destructive design paradigms, it is interesting to consider how recent advances in computing are allowing for a plethora of new, natively non-destructive design tool startups to be built. For example:
Modyfi makes use of GPU acceleration via WebGPU to offer a fully non-destructive image & motion graphics editing tool. All visual & motion effects in Modyfi are layers, yet everything is still instantaneously previewed & rendered.
Modumate is a browser-native architecture tool built on top of recent 3D game engine improvements, which models buildings not just as surfaces, but as collections of 3D parts (e.g. doors, knobs, handles, walls, studs) which compose into the 3D model of the house.
Womp utilizes edge computing, pixel streaming, & ML-assisted rendering techniques to offer a non-destructive 3D editing tool. You create 3D shapes in Womp not by crafting the boundary conditions of your desired shape, but by combining different baseline shapes which then intersect, add, or subtract into your desired shape.
NTop is a 3D design tool for physical parts/components based on implicit modeling rather than boundary representations like meshes. 3D parts are created via complex combinations of fields and shapes, rather than simply defined by a triangle or mesh topology. This allows for much more sophisticated analysis, simulation, and optimization of 3D parts to be done, and also makes it much more feasible to build highly complex geometries.
Both Womp and NTop are built around signed-distance-fields (SDFs), a mathematical formulation for representing geometries which has been understood for a long time, but was not widely adopted until recently due to its computational requirements. You can read more about the underlying mathematics of creating 3D shapes in this way here.
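A hedged sketch of the SDF idea: each shape is a function returning the signed distance from a point to its surface (negative inside), and boolean operations become simple math on those distances. The function names here are illustrative, not any product's API.

```python
import math

def sphere(cx, cy, cz, r):
    """SDF of a sphere: distance to center minus radius (negative inside)."""
    return lambda x, y, z: math.sqrt((x - cx)**2 + (y - cy)**2 + (z - cz)**2) - r

def union(d1, d2):
    # A point is inside the union if it is inside either shape.
    return lambda x, y, z: min(d1(x, y, z), d2(x, y, z))

def subtract(d1, d2):
    # Inside d1 but outside d2.
    return lambda x, y, z: max(d1(x, y, z), -d2(x, y, z))

# Combine baseline shapes instead of editing a boundary mesh.
blob   = union(sphere(0, 0, 0, 1.0), sphere(1.0, 0, 0, 1.0))
dented = subtract(blob, sphere(0, 0, 1.0, 0.5))

inside = blob(0.5, 0, 0) < 0   # True: this point lies between the two spheres
```

Because the final object is just a composed function, every constituent shape remains individually editable — the non-destructive property falls out of the representation itself.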
A critical insight underlying all of these startups: once it is technically feasible to overcome the computational limitations of non-destructivity in a given design tool category, you often have the substrate to build a 10-100x better design tool. This improvement comes not just from the benefits already discussed, but also from a wide array of second-order effects that stem from modeling a system non-destructively, such as:
Community
Modeling systems as a series of mutations allows things to be encapsulated as reusable functions and components. This can become the basis of a community where people share components they have built for others to use, edit, and fork. This is “obvious” in the software field, where anyone can create libraries that can be installed via package managers, yet it is non-existent in so many other design domains.
Modumate is a great example of this — they offer a marketplace of community- and company-sourced pre-built architectural components. This is not really possible in traditional BIM engines, which only model the final surface geometry.
Simulation
Systems modeled non-destructively are much more amenable to simulation. It is easier to “sweep through” a range of different configuration options for each layer or node in your compute graph, testing or evaluating the final output.
Grasshopper’s procedural generation workflows are a classic example of this.
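Because the edit graph is explicit, sweeping a configuration means re-rendering with each candidate value for one node while the rest of the graph stays fixed. A toy sketch (the node names and data are invented):

```python
# Two illustrative graph nodes, each a parameterized transform on a value list.
def scale(factor):
    return lambda values: [v * factor for v in values]

def offset(amount):
    return lambda values: [v + amount for v in values]

base = [1, 2, 3]

results = {}
for factor in (0.5, 1.0, 2.0):           # sweep one node's configuration
    graph = [scale(factor), offset(10)]  # the rest of the graph is unchanged
    out = base
    for node in graph:
        out = node(out)
    results[factor] = out                # evaluate the final output per config
```

In a destructive tool, the same sweep would require manually undoing and re-applying every edit for each candidate value.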
Optimization
Systems modeled non-destructively can be optimized at a “meta” level by the compute engine, which can look at all the changes that should be applied together holistically and “compile” them down to something more efficient.
A simple, somewhat contrived example of this in the graphics domain: if you apply a series of 10 visual modifiers to an image and the 11th layer is a new image with 100% opacity, you can ignore all underlying layers at render time. Modyfi is able to do real-time motion graphics in the browser via optimizations of this sort.
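That opacity example can be sketched as a tiny "compiler pass" over a layer stack. This is an invented illustration of the idea, not Modyfi's actual engine:

```python
def compile_layers(layers):
    """Drop any layers hidden beneath a fully opaque one.

    Each layer is a dict; `opaque=True` means it completely replaces
    everything below it, so those layers are dead work at render time.
    """
    for i in reversed(range(len(layers))):
        if layers[i].get("opaque"):
            return layers[i:]
    return layers

stack = (
    [{"name": f"modifier_{i}"} for i in range(10)]   # 10 visual modifiers
    + [{"name": "new_image", "opaque": True}]        # 11th: opaque image
)
optimized = compile_layers(stack)   # only the opaque layer survives
```

The user still sees and edits all 11 layers; the optimization happens transparently at the “meta” level, which is only possible because the full graph is available to the engine.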
Higher order system design
Non-destructive modeling typically allows for much more complex objects or systems to be built because of its composability benefits. This makes it easier to encapsulate logic, divide it amongst different people on a team, test sub-systems, and similar.
This is a key reason why many functional 3D modeling domains — computer-aided design for industrial design, electronics, and mechanical engineering — have moved so aggressively to parametric, non-destructive workflows: these are exceptionally rich, complex systems which benefit particularly from a non-destructive paradigm. In contrast, 3D models built purely for rendering (e.g. animations, videos) are in theory less rich and complex.
The Startup Opportunity Around Non-Destructive Editing
I observe that rapid advances in many areas of computing — graphics, machine learning, edge databases, pixel streaming, distributed systems, hardware acceleration, and in-browser execution via WebAssembly — are fundamentally changing what is computationally feasible in many design tools today.
I suspect that in many cases, this confluence of technology advances suddenly allows a purely non-destructive paradigm to be applied in various design tool categories. As a result, I think we will see many more startups emerge along the lines of Modyfi, Womp, and NTopology which build around non-destructivity as a core wedge to rethink their category.
I think one of the most compelling variations of this is going after categories where non-destructivity is possible but requires a specialized workflow; Photoshop and Blender are both good examples. Such products tend to get bloated with UX complexity, as non-destructivity is added in bits and pieces on top of a fundamentally destructive baseline, requiring significant user education and forcing the user to maintain a mental model of how they have modeled each piece of their system. When you can instead make non-destructivity the ubiquitous default, you actually simplify the product while simultaneously enhancing what can be done with it.