Before getting into the Canvas-related capabilities, we discussed engineering practices, such as designing a Monorepo architecture for the project and adopting the new bundler Rspack, to manage the whole project and its package hierarchy. Now let's return to the Canvas-related design and discuss how to manage events and render multiple layers on top of the lightweight DOM we implemented previously.
Articles related to the Canvas resume editor project:
I was recommended Canvas by a community veteran, so I built a resume editor while learning Canvas
Canvas Graphic Editor - Data Structure and History (undo/redo)
Canvas Resume Editor - Graphic Drawing and State Management (Lightweight DOM)
Canvas Resume Editor - Layered Rendering and Event Management Capability Design
We mentioned earlier that we want to implement Canvas drawing and interaction by simulating the DOM, which is what we previously referred to as the lightweight DOM. This obviously involves two important aspects of the DOM, namely rendering and event handling. Let's talk about the rendering aspect first. Using Canvas is akin to setting every DOM element's position to absolute, where all rendering is done relative to the position of the Canvas element.
Thus, we need to consider overlapping scenarios. For example, take an element A with a zIndex of 10, its child element B with a zIndex of 100, and another element C at the same level as A with a zIndex of 20. When these three elements overlap, intuitively, or judging purely by the zIndex values, element B with the highest zIndex should end up on the top layer.
However, when we run the code, we find that the topmost element is C (green), followed by B (blue), with A (red) at the bottom, even though their zIndex relationship is C: 20 - B: 100 - A: 10. From this observation we can conclude that zIndex only takes effect among elements within the same stacking level. For instance, if the zIndex of A is 10 and the zIndex of B (A's child element) is 1, then when these two elements overlap, the top element will still be B, which indicates that child elements are usually rendered above their parent elements.
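For reference, a minimal browser-side sketch of this stacking behavior might look like the following; the markup and styles are purely illustrative and are not part of the editor's code:

```typescript
// A (zIndex 10) contains B (zIndex 100); C (zIndex 20) is a sibling of A.
// Although B has the largest zIndex, it is confined to A's stacking context,
// so the visible order from top to bottom is C -> B -> A.
const make = (id: string, zIndex: number, background: string): HTMLDivElement => {
  const el = document.createElement("div");
  el.id = id;
  Object.assign(el.style, {
    position: "absolute",
    left: "20px",
    top: "20px",
    width: "100px",
    height: "100px",
    zIndex: String(zIndex),
    background,
  });
  return el;
};

const a = make("A", 10, "red");
const b = make("B", 100, "blue");
const c = make("C", 20, "green");
b.style.left = "40px"; // offset the child so the overlap is visible
b.style.top = "40px";
a.appendChild(b);
document.body.append(a, c);
```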
Here, we need to simulate this behavior as well. However, since we don't have the browser's compositing layers and can only draw onto a single layer, we need to render according to a specific strategy. Similar to the DOM's rendering strategy, we render parent elements before child elements, resembling a depth-first recursive traversal for the rendering order. The difference is that before traversing each node's children we sort them by zIndex, which ensures the correct overlap relationship among sibling nodes.
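As a minimal sketch of this traversal, assuming each node holds a children array, a zIndex, and a method that draws only itself (the names here are illustrative rather than the editor's actual API):

```typescript
interface CanvasNode {
  zIndex: number;
  children: CanvasNode[];
  drawSelf: (ctx: CanvasRenderingContext2D) => void; // draws only this node
}

// Depth-first rendering: the parent is drawn first, then its children,
// with siblings sorted by zIndex so that larger values end up on top.
const renderNode = (node: CanvasNode, ctx: CanvasRenderingContext2D): void => {
  node.drawSelf(ctx);
  const sorted = [...node.children].sort((a, b) => a.zIndex - b.zIndex);
  for (const child of sorted) {
    renderNode(child, ctx);
  }
};
```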
Therefore, when operating on nodes at different levels, we need to build hierarchy handling into each node's operations, and each node needs to implement operations similar to the DOM's. While we are at it, we can also implement other operations, such as adding caching and clearing the cache relationships along the entire node chain.
Once our node hierarchy is established, we can traverse it to obtain the rendering order of the current node's contents. Since we deal more with event handling here, we choose to reverse the order at this stage. As described before, because this is a custom lightweight DOM, we can introduce caching inside it. The structure is tree-like, so the hierarchy is clear, and each node can keep a cache of its child nodes; for instance, the root node can hold a collection of all node contents, which is a typical space-for-time trade-off.
Adding caching requires designing a clear cache-invalidation scheme. One straightforward approach is that every append/remove operation on a node clears the cache of that node and of the entire chain up to the root node. In a binary-tree analogy, if we operate on a node's right subtree, the left subtree's cache remains intact, so when we later retrieve nodes for rendering we can read the left subtree's cache directly and gain efficiency.
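A sketch of this idea, with illustrative names, a lazily rebuilt flattened cache, and upward invalidation on every structural change, might look like this:

```typescript
// Illustrative sketch: a tree node with a lazily built, flattened subtree
// cache that is invalidated along the parent chain whenever the tree changes.
class TreeNode {
  public parent: TreeNode | null = null;
  public readonly children: TreeNode[] = [];
  private flatCache: TreeNode[] | null = null;

  constructor(public zIndex: number = 0) {}

  // Flattened subtree in render order, rebuilt lazily and cached
  // until this subtree changes.
  public getFlatNodes(): TreeNode[] {
    if (!this.flatCache) {
      const sorted = [...this.children].sort((a, b) => a.zIndex - b.zIndex);
      this.flatCache = sorted.flatMap(child => [child, ...child.getFlatNodes()]);
    }
    return this.flatCache;
  }

  public append(child: TreeNode): void {
    child.parent = this;
    this.children.push(child);
    this.clearCacheUpward();
  }

  public remove(child: TreeNode): void {
    const index = this.children.indexOf(child);
    if (index === -1) return;
    this.children.splice(index, 1);
    child.parent = null;
    this.clearCacheUpward();
  }

  // Clear the cache of this node and of every ancestor up to the root;
  // sibling subtrees that were not touched keep their caches intact.
  private clearCacheUpward(): void {
    let node: TreeNode | null = this;
    while (node) {
      node.flatCache = null;
      node = node.parent;
    }
  }
}
```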
At this point, we can obtain all nodes in render order. Leveraging the previous plugin-based design and the on-demand rendering capability, when a node needs to be rendered we simply call the drawingEffect method, which collects the affected nodes through collectEffects. One aspect to consider here is what happens when a graphic changes. If, for instance, two graphics A and B overlap, with A stacked above B, then changing a property of B requires redrawing both A and B. If A were not redrawn, the overlapping part of B would be painted over A, hence the need to recalculate the affected rendering range here.
Next, after we've collected all the affected shapes, we need to keep composing all the affected ranges, which essentially expands the affected area. Then, using batchDrawing, we gather the affected shape ranges within a short period and draw all the affected nodes in a single unified pass. It's important to note that the order in which collectEffects retrieves nodes from the root is the opposite of the event invocation order designed initially, so the traversal order used to find affected nodes here is reversed.
In addition to rendering, we also need to consider the event implementation, such as our selection state, click behavior, drag behavior, and more. Take resizing with the eight vertex handles as an example: these points must be rendered above the selected node, regardless of the rendering order or event dispatch order. So if we now need to simulate the onMouseEnter event, then because these eight points overlap the selected node, when the mouse moves onto one of these overlapping points, the Resize handle is the one actually rendered on top, so only that point's event should be triggered, not the event of the selected node behind it.
Since there is no real DOM structure, we can only rely on coordinate calculations. The simplest method is therefore to guarantee the overall traversal order: higher nodes must be traversed before lower nodes, and once we find the hit node we end the traversal and trigger the event. We also need to simulate the event capturing and bubbling mechanisms. As mentioned above, this order is actually the opposite of the rendering order; the element rendered on top is typically rendered last, but because it sits at the very top, events should be dispatched to it first.
Assume our three nodes A, B, and C are as shown above and currently overlap. Without considering zIndex, according to the rendering logic we designed earlier, the rendering order would be A -> B -> C. At this point, node C should be on top, followed by node B, and finally node A. So in our event capture dispatch, the event call order should be C -> B -> A, meaning the order is the opposite of the rendering order. It is also worth noting that child elements are usually rendered above their parent elements, i.e., child elements are typically rendered later than their parents, so in the capture dispatch child elements are usually scheduled before their parent elements.
For our data structure, we want the topmost elements to take priority during traversal. The overall scheduling is more like a right-subtree-first postorder traversal, that is, simply swapping the positions of the node output, the left subtree, and the right subtree in a preorder traversal. However, a new problem arises: for high-frequency events like onMouseMove, recomputing node positions with a depth-first traversal on every call is very expensive. So here we come back to the caching mentioned earlier: we store all child nodes of the current node in order, and when a node changes we directly notify every ancestor level to recalculate. This can be done on demand, so if another subtree remains unchanged, the next calculation is saved there. Since we only store references to the nodes, the overhead is minimal, and this effectively turns the recursion into an iteration.
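Assuming the flattened cache described above is stored with the topmost node first (i.e., the reverse of the render order), the hit test becomes a simple iteration that stops at the first match. The names below are illustrative, including the hypothetical ignoreEvent flag, which we will come back to shortly:

```typescript
// Illustrative hit target: a bounding box plus an ignoreEvent flag that
// simulates CSS pointer-events: none.
interface HitTarget {
  x: number;
  y: number;
  width: number;
  height: number;
  ignoreEvent?: boolean;
}

// `flatNodes` is the cached flattened tree ordered with the topmost node
// first, i.e. the reverse of the render order, so the first match is the
// visually topmost node and the traversal can stop immediately.
const hitTest = (flatNodes: HitTarget[], x: number, y: number): HitTarget | null => {
  for (const node of flatNodes) {
    if (node.ignoreEvent) continue;
    const inside =
      x >= node.x && x <= node.x + node.width &&
      y >= node.y && y <= node.y + node.height;
    if (inside) return node;
  }
  return null;
};
```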
Since the DOM event flow fits our event scheduling scenario very well, we also need to simulate it. Additionally, to cover the CSS property pointerEvents: none, we can filter out such nodes by marking them as ignoreEvent. For example, if we click on an element, that element becomes the final stage of event dispatch. Because this node is the event source, we have essentially already found the current node, so when simulating the capture and bubble phases there is no need to trigger them recursively; they can be simulated with two stacks.
When simulating events, we need to start from the root node. Thanks to the earlier design, there is no need to attach events to each Node in a flat manner; instead, we ensure that events start from the ROOT node and end back at the ROOT node. The entire tree structure and its state are managed by plugins through the DOM-like API, so we only need to handle ROOT management. This is very convenient, and our state management can be implemented on top of this design.
Here, we continued to focus on the Canvas-related design, discussing how to manage the event dispatch order and the multi-layer rendering capability on top of the lightweight DOM implementation we built previously. We also implemented data caching for the overall rendering and event-dispatch node order, simulated capture and bubble dispatching, and briefly discussed on-demand rendering. Next, we will look at implementing focus for the Canvas and the topics around an infinite canvas. Since we have already simulated events and the DOM-like system, we can go on to discuss dragging, selection, box selection, node resizing, and reference lines. After completing the graphic plugin design for the whole system, we can also explore how to draw rich text on the Canvas.