Table of Contents
- 1 Introduction
- 2 Making Components
- 3 Shadow DOM
- 4 Events
- 5 Templates
- 6 Slots
- 7 Components and Styles
- 8 Making Single Page Apps
- 9 Professional Components
- Introducing @nyaf
- Elevator Pitch
- Parts
- Project Configuration with TypeScript
- Project Configuration with Babel
- Components
- The First Component
- Template Features
- n-repeat
- JSX / TSX
- Examples
- Select Elements
- Smart Components
- The Life Cycle
- State and Properties
- Directives
- Events
- Router
- Shadow DOM
- Services
- Forms Module
- View Models
- Data Binding
- Creating Forms
- Smart Binders
- Validation
- Additional Information
- Custom Binders
- Installation of Forms Module
- The Flux Store
- Type Handling in TypeScript
- Global and Local Store
- Disposing
- Effects Mapping
- Automatic Updates
- Installation
- Notes
This book explains Web Components. Additionally, it shows how to create a simple and small layer (a so-called thin library) around the native HTML 5 API to make your life as a developer a lot easier. Such a library is available as an Open Source project called @nyaf – Not Yet Another Framework. It’s not a requirement, but it lowers the hurdles of using Web Components significantly and avoids the jump into full-blown frameworks and libraries such as Angular or React.
Who Should Read this Book?
This book is aimed at beginners and experienced web developers alike. The code is mainly TypeScript; a few examples are pure ECMAScript.
In any case, I tried not to impose any prerequisites on the reader. You do not need to be a computer scientist, have a perfect command of the language, or know rocket science. No matter in what context you have encountered Web Components, you should be able to read this text. However, those who benefit the most are readers already working on frontend projects. Especially those overwhelmed by frameworks, techniques, and monstrous project structures will learn what modern Web development has to offer. Nowadays all modern browsers are able to execute ES2015 and above natively, and transpilers such as Babel or TypeScript make it easy to adapt.
What You Should Know
Readers of my books need hardly any prerequisites. Some HTML cannot harm, and anyone who has already seen a static HTML page (the source code, of course) is certainly well prepared. I assume that you have at least a current operating system on which you can install an editor to create web pages.
Web Components are not that demanding. But they are provided by an API built on top of the HTML 5 API. This API is offered through JavaScript. Hence, as an additional requirement, you should be able to read JavaScript and have a basic understanding of TypeScript.
How You Can Read this Text
I will not dictate how you should read this text. In the first draft of the structure, I tried several variations and found that there is no ideal form. However, readers today tend to consume smaller chunks, independent chapters, and focused content. This book supports this trend by keeping the subject small and focused, with no “blah-blah” to inflate the volume.
Beginners should read the text as a narrative from the first to the last page. Those who are already somewhat familiar can safely skip certain sections.
Conventions used in the Book
The subject is not easy to typeset, because scripts are often too wide for the page, and it would be nice to present them in the most readable form. I have therefore included extra line breaks to aid readability; they are not required in the editor of your development environment.
In general, all program code is set in a non-proportional font. In addition, most scripts have line numbers:
If you need to enter something at the prompt or in a dialog box, this part of the statement is printed in bold:
$ npm install typescript
The first character is the prompt and is not entered. In the book I use the Linux prompt of the bash shell. The commands will work, without exception, unchanged on Windows as well. The only difference then is the command prompt C:> or something similar at the beginning of the line. Usually the instructions use relative paths or no paths at all, so the actual prompt shouldn’t matter, apart from the fact that you should be in your working folder.
Expressions and command lines are sometimes peppered with all kinds of characters, and in almost all cases every single character matters. Often, I’ll discuss the use of certain characters in precisely such an expression. Then the “important” characters stand alone, separated by line breaks, and in this case, too, line numbers are used to reference the affected symbol in the text exactly (note the : (colon) character in line 2):
The font is non-proportional, so that the characters are countable and opening and closing parentheses are always in the same column.
Symbols
To facilitate the orientation in the search for a solution, there is a whole range of symbols that are used in the text.
Preparations
To use the code in the book you need this:
- A machine with NodeJs v10+ installed. Any desktop OS will do, whether Windows, macOS, or Linux. Windows users can use any shell: WSL, CMD, or PowerShell. Of course, any shell on Linux is good enough, too.
- An editor to enter code. I recommend using Visual Studio Code. It runs on all mentioned operating systems. WebStorm is also an amazingly powerful editor.
- A folder where the project is being created. Easy enough, but keep your environment clean and organized like a pro.
This book comes with a lot of examples and demo code. It’s available on Github at:
The folders are structured following the book, chapter by chapter.
If you’re relatively new to the Web development field, test your knowledge by cloning the repo, bringing the examples to life, and watching the outcome. Read the text and add your own stuff once you know that the environment is up and running.
Using the @nyaf Library
The author of this book has years of experience with Web Components. After several projects, smaller and bigger ones, the frustration grew about the lack of support for simple tasks and the burden of huge frameworks that admittedly solve these problems, but come with an overwhelming amount of additional features. None of the frameworks felt right. I suspect that most developer support code has a similar trigger, so I decided to start my own library project. It’s called @nyaf – Not Yet Another Framework. It’s just a thin library, only a few kilobytes, and it solves just the basic needs:
- A thin wrapper to handle Web components in TypeScript code very well
- A router to have full single page application support
- Support for data binding and form validation
- A Flux based store to get a professional architecture
It’s split into three parts, so you just use what you want and skip the features not (yet) needed.
About the Author
Jörg works as a trainer, consultant, and software developer for major companies worldwide. He builds on 25 years of experience with the web and many, many large and small projects.
Jörg believes it is especially important to have solid foundations. Instead of always chasing the latest framework, many developers would be better advised to create and provide a robust foundation.
Jörg has written over 60 titles for renowned and prestigious specialist publishers in German and English, including some bestsellers for Carl Hanser, Apress, and O’Reilly.
Anyone who wants to learn this subject in a compact and fast way is right here. Much more information can be found on his website www.joergkrause.de.
Contact the Author
In addition to the website, you can also contact him directly at www.IT-Visions.de. If your business needs professional advice on web topics or continuing education/training for software developers, please contact Jörg through his website or book directly via http://www.IT-Visions.de.
1 Introduction
Web components are a set of standards to make self-contained components: custom HTML elements with their own properties and methods, encapsulated DOM, and styles. The technology is natively supported by all modern browsers and does not require a framework. The API has some kinks and quirks, though. I will explain those obstacles in great detail, but it’s helpful to know that you can make your life easier. A thin wrapper library to handle common tasks is the answer. This is what the @nyaf – Not Yet Another Framework – code is for. A full description can be found in the appendix. However, all examples and explanations within the book chapters are completely independent. Of course you can use any other component library.
1.1 The Global Picture
This section describes a set of modern standards for Web Components.
Components
The whole component idea is nothing new. It’s used in many frameworks and elsewhere. Before we move to implementation details, imagine how the internals of a page in a browser are described. You have a tree of simple elements, defined by the HyperText Markup Language (HTML). You also have the ability to describe the appearance of each element using Cascading Style Sheets (CSS). You have the ability to manipulate both parts dynamically at runtime using ECMAScript (also known as JavaScript). The most important point in this description is the word “tree”. Elements form a tree, where one or more elements are the children of another one.
If the basic structure of a page is already a tree of smaller parts (see Figure 1-1), it makes sense and simplifies development, if on a higher level the elements form a tree too. Such a unit, hierarchical collections of functionality that can form a tree, is called a component.
A page hence consists of many components. Each component, in its turn, has many smaller details inside. At the end, it’s still pure HTML.
The components can be very complex, sometimes more complicated than websites themselves. How are such complex units created? Which principles could we borrow to make our development similarly reliable and scalable? Or, at least, close to it.
Component Architecture
The well-known rule for developing complex software is: don’t make complex software. If something becomes complex, split it into simpler parts and connect them in the most obvious way. A good architect is the one who can make the complex simple to handle for the developer. (That’s not the same as a UX designer, who makes the complex application simple to use for the end user; but that’s an entirely different story.)
You can split user interfaces into visual components: each of them has its own place on the page, can “do” a well-described task, and is separate from the others.
Let’s take a look at a website (see Figure 1-2), for example Twitter. It naturally splits into components:
- Top navigation
- Main menu
- User profile
- Tweet feed
- Suggestions
- Trending subjects
Components may have sub-components, e.g. messages may be part of a higher-level “message list” component. A clickable user picture itself may be a component, and so on. It boils down to HTML eventually. If there is no further simplification, a native element forms a leaf in the tree. The profile branch may end with an <img> tag, then.
How do we decide what a component is? That comes from intuition, experience, and common sense. Usually it’s a separate visual entity that we can describe in terms of what it does and how it interacts with the page. In the case above, the page has blocks, each of them playing its own role, so it’s logical to make these components. If you’re new to this kind of software architecture, it’s good advice to keep a component smaller than the typical size of your screen. In reality, that means the lines of code that form the component should fit on your standard monitor using your favorite font size. For me, it’s a maximum of 100 lines of code. If my components grow, I try to split them into smaller chunks. However, always keep the logical structure in mind. If two parts of a component differ significantly and both may use only 25 lines of code, it’s still a good idea to split them up and have clean code instead of clinging to the 100-lines rule.
Parts of a Component
A component has several parts. These can be split into several files or appear in just one file. It mainly depends on the environment you use and the strategy to create, compile, and deploy the final code. In a logical view these are the parts:
- A JavaScript or TypeScript class.
- A DOM structure, managed solely by its class, so outside code doesn’t access it (the “encapsulation” principle).
- CSS styles, applied to the component. These can be isolated or global.
- An API: events, class methods etc. that interact with other components or application parts.
Once again, the whole “component” thing is nothing special. It’s just a clever approach to handle the complexity of web pages in a way an average human being can understand.
There exist many frameworks and development methodologies to build them, each with its own bells and whistles. Usually, special CSS classes and conventions are used to provide a “component feel” – CSS scoping and DOM encapsulation. Web Components provide built-in browser capabilities for that, so we don’t have to emulate them any more. That’s one of the most powerful developments to arise in the realm of web development in recent years. Unfortunately, the “component frameworks”, especially Angular, Vue, and React, seem to be seen by developers as the final solution, the only way to create components. We understand that’s because it brings users and makes the framework more useful. But it’s not entirely true. The native stuff is almost as good as these frameworks, as you will see soon. However, don’t ask an Angular fellow. What should she say?
Web Components bring some basic features that make them extraordinarily useful.
- Custom elements – to define custom HTML elements.
- Shadow DOM – to create an internal DOM for the component, hidden from the other parts of the app.
- CSS Scoping – to declare styles that only apply inside the Shadow DOM of the component.
- Event re-targeting and other minor stuff to make custom components better fit the development requirements.
In the next chapters I’ll go into the details of such components – the fundamentals and well-supported features of Web Components that are really good on their own. Also, some smart stuff written around them will be explained to show how you can get very close to the comfort level of Angular or React without actually using them.
1.2 The Rise of Thin Libraries
After digging deeper into the Web Component world it seems that there is no need for a full framework like Angular or React anymore. However, some repeating tasks are boring and error-prone. Hence a small layer around the basic API would be helpful. That was the beginning of the famous @nyaf thin library. It’s not called a framework because even with the two additional modules that became part of the package it’s very small indeed. These additional modules are, first, @nyaf/forms, which is responsible for bi-directional binding and validation. Second, the @nyaf/store module is a simple yet powerful Flux based store. It simplifies the architecture of huge applications dramatically.
One of the clear goals from the beginning was the avoidance of dependencies. You need this and nothing more. Another goal is interoperability with any existing library. Even pure jQuery code will not harm the usage of @nyaf. And, finally, it’s pure ES2015+ and there are no polyfills or additions for older browsers. Modern browsers have a market share of 96% and that’s what you target. The full documentation is added as an appendix to this book for your reference.
Single Page Apps
A single-page application (SPA) is a web application or website that interacts with the web browser by dynamically rewriting the current web page with new data from the web server, instead of the default method of the browser loading entire new pages. The goal is faster transitions that make the website feel more like a native app.
In a SPA, all necessary HTML, JavaScript, and CSS code is either retrieved by the browser with a single page load, or the appropriate resources are dynamically loaded and added to the page as necessary, usually in response to user actions. The page does not reload at any point in the process, nor does control transfer to another page. The location hash or the HTML5 History API can still be used to provide the perception and navigability of separate logical pages in the application.
Web Components make it easy to create SPAs. The main part is a feature called “router”. The router routes a call (usually a click on a hyperlink or button) by using an assigned URL to some kind of management code. That code creates a new tree of components and moves it to a particular target element. The browser reacts to this operation by rendering the elements. The developer has to make these decisions to get it working:
- Define a target – an element where the replaceable tree appears. We call this usually an “outlet”.
- Create a definition that maps routes to components. This is a “router configuration”.
Again, some convenient stuff can be created to make your daily life easier. See chapter “Single Page Apps” for more details of possible implementations.
The HTML 5 API
The HTML 5 API is amazingly powerful and covers a wide range of features. All existing frameworks and libraries – with no exception – are built on top of this API. The advantage of using a certain framework is primarily that you get a simplified view, a reduced view, a more elegant API style, or even more robust code through additional error handling. These are all good reasons to use a framework or library, aren’t they?
Imagine you know all these APIs. You could then avoid a few of the libraries and probably a whole framework. The resulting code is smaller, faster, and easier to maintain. Learning the HTML 5 APIs is essential for web developers nowadays.
The Template Language
A template language simplifies the creation of forms. It’s not enforced; you can of course use the basic API and pure HTML. In complex applications you’ll see that there is a lot of repeating code. There are many template languages available and I’ll present a few of them so you can compare and choose freely.
Smart Decorators
Instead of splitting the definition and registration, the component itself carries all necessary information as meta-data. Decorators are a feature of the TypeScript compiler and eventually they will become part of the ECMAScript standard.
This coding style supports the “separation of concerns” principle and is easy to implement. A final solution could look like this:
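A minimal, self-written sketch of the idea (the decorator name CustomElement follows the text; the hidden property __selector__ and the tag name app-main are illustrative only – the real @nyaf decorator offers more):

```ts
// A minimal decorator that attaches the tag name as hidden metadata.
// External bootstrap code can read it and call customElements.define().
function CustomElement(selector: string) {
  return function (target: any) {
    target.__selector__ = selector;
  };
}

@CustomElement('app-main')
export class MainComponent extends HTMLElement {
  connectedCallback() {
    this.innerHTML = '<p>Hello from a decorated component</p>';
  }
}
```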
The decorator CustomElement is called during the instantiation process of the class. It can access both the underlying function definition and the instance. Here you can manipulate the code further (at runtime) by adding hidden properties, for example. Other code fragments may access these properties and act according to these hidden instructions. In the above example some external code may see this and take it as a “please register me” instruction. The advantage here is that the component developer doesn’t need to think about such infrastructure stuff and the code is much smaller and easier to read.
TypeScript
TypeScript is not covered in this book. It is, however, the language used to write components and related libraries. It’s not strictly necessary for writing Web Components, but it’s a strong tool in the developer’s toolset. The ability to transform JSX has already been explained and if you don’t use TypeScript you have to replace one tool with another. So avoiding it gains nothing, while embracing it gives a bunch of advantages.
One of the main reasons for its success is that valid JavaScript is valid TypeScript. Any ES2015 example shown here will be accepted by the TypeScript transpiler. What’s added is the ability to use features from newer JavaScript versions such as ECMAScript 2020 today, even if the browsers do not have full support yet. And it adds types that reduce error-prone code. In short it is this:
TypeScript = JavaScript + Type System
TypeScript is compatible with ECMAScript 2018 and provides the necessary polyfills.
WebPack
WebPack is an open-source JavaScript module bundler, primarily for JavaScript, but it can transform front-end assets like HTML, CSS, and images if the corresponding loaders are included. Webpack takes modules with dependencies and generates static assets representing those modules. The dependencies and the generated dependency graph allow web developers to use a modular approach for their web application development. It can be used from the command line, or can be configured using a configuration file named webpack.config.js. This file is used to define rules, plugins, etc., for a project.
WebPack is highly extensible via rules which allow developers to write custom tasks that they want to perform when bundling files together. NodeJs is required for using webpack, hence it’s a command line tool running at development time.
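A minimal webpack.config.js for a TypeScript-based component project could look roughly like this (the entry point, output name, and the use of ts-loader are assumptions, not fixed requirements):

```js
const path = require('path');

module.exports = {
  // placeholder entry point of the application
  entry: './src/main.ts',
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'dist')
  },
  resolve: { extensions: ['.ts', '.js'] },
  module: {
    rules: [
      // one rule: transpile TypeScript files with ts-loader
      { test: /\.tsx?$/, use: 'ts-loader', exclude: /node_modules/ }
    ]
  }
};
```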
1.3 Compatibility
For every new technology it takes some time until all browsers and tools have a fully working implementation. The great news about the year 2020 is that meanwhile all browsers (see Figure 1-3) have full support and you rarely need a polyfill.
1.4 Other Libraries
Apart from the one I feature in this book, @nyaf, there are a few others that I find worth mentioning here. They differ in focus and quality. It depends on your project and feature requirements which one suits best. It’s also a good idea to analyse them, learn how things work internally, and consider going with code that you own. The following list is pulled from webcomponents.org:
- Hybrids is a UI library for creating web components with a simple and functional API. The library uses plain objects and pure functions for defining custom elements, which allows very flexible composition. It provides a built-in cache mechanism, a template engine based on tagged template literals, and integration with developer tools.
- LitElement uses lit-html to render into the element’s Shadow DOM and adds API to help manage element properties and attributes. LitElement reacts to changes in properties and renders declaratively using lit-html.
- Polymer is a web component library built by Google, with a simple element creation API. Polymer offers one- and two-way data binding into element templates, and provides shims for better cross-browser performance.
- Skate.js is a library built on top of the W3C web component specs that enables you to write functional and performant web components with a very small footprint. Skate is inherently cross-framework compatible. For example, it works seamlessly with – and complements – React and other frameworks.
- Slim.js is a lightweight web component library that provides extended capabilities for components, such as data binding, using ES2015 native class inheritance. This library is focused on giving the developer the ability to write robust and native web components without the hassle of dependencies and the overhead of a framework.
- Stencil is an open source compiler that generates standards-compliant web components.
2 Making Components
We can create custom HTML elements, described by a class, with its own methods and properties, events and so on. Once a custom element is defined, we can use it on par with built-in HTML elements. These elements are called Web Components.
2.1 Basics
That’s great, as the HTML dictionary is rich, but not infinite. There are no <easy-tabs>, <sliding-carousel>, or <beautiful-upload>. Just think of any other tag we might need.
We can define them with a special class, and then use them as if they were always a part of HTML.
There are two kinds of custom elements:
- Autonomous custom elements – custom elements extending the abstract HTMLElement class.
- Customized built-in elements – extending built-in elements, like a customized button based on HTMLButtonElement, etc.
First I’ll cover autonomous elements, and then move to customized built-in ones.
To create a custom element, we need to tell the browser several details about it: how to show it, what to do when the element is added to or removed from the page, etc. That’s done by making a class with special methods. That’s easy, as there are only a few methods, and all of them are optional.
A sketch with the full list is shown in Listing 2-1:
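A sketch along these lines, using the standard life-cycle callbacks, could look like this:

```js
class MyElement extends HTMLElement {
  constructor() {
    super();
    // element created; attributes are not processed yet
  }

  connectedCallback() {
    // browser calls this when the element is added to the document
    // (can be called many times if the element is repeatedly added/removed)
  }

  disconnectedCallback() {
    // browser calls this when the element is removed from the document
  }

  static get observedAttributes() {
    return [/* array of attribute names to monitor for changes */];
  }

  attributeChangedCallback(name, oldValue, newValue) {
    // called when one of the attributes listed above is modified
  }

  adoptedCallback() {
    // called when the element is moved to a new document
  }

  // there can be other element methods and properties
}
```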
All methods shown are optional, implement those you really need only. Be aware that under certain circumstances the methods might be called multiple times.
After defining the component, we need to register it as an element. We need to let the browser know that <my-element> is served by our new class.
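The registration is a single call to the custom elements registry (assuming the class sketched above):

```js
customElements.define('my-element', MyElement);
```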
Now, for any HTML element with the tag <my-element>, an instance of MyElement is created and the aforementioned methods are called. We can also use document.createElement('my-element') to create the element through an HTML 5 API call and attach it to the DOM later.
A custom element name must have a hyphen -, e.g. my-element and super-button are valid names, but myelement is not.
To load and use the component you need an HTML page. This could look as simple as shown in Listing 2.2.
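Such a page could look roughly like this (the file name my-element.js is a placeholder for the file containing the class and the define() call):

```html
<!DOCTYPE html>
<html>
  <head>
    <meta charset="utf-8">
    <title>My Element Demo</title>
    <!-- placeholder script that defines and registers MyElement -->
    <script defer src="my-element.js"></script>
  </head>
  <body>
    <my-element></my-element>
  </body>
</html>
```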
It’s recommended to wait for the document ready state before applying the registration. This might or might not change the behavior – it depends on the inner construction of the component’s content. But waiting for the ready event seems to fix some common issues and has rarely any disadvantages.
A First Example
For example, there already exists a <time> element in HTML, for date and time information. But it doesn’t do any formatting by itself.
Let’s create a <time-format> element that displays the time in a nice, language-aware format, as shown in Listing 2-3.
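The component could look roughly like this (a sketch along the lines of Listing 2-3; the exact attribute names are an assumption):

```js
class TimeFormat extends HTMLElement {
  connectedCallback() {
    const date = new Date(this.getAttribute('datetime') || Date.now());

    // Intl.DateTimeFormat renders the date according to the attributes
    this.innerHTML = new Intl.DateTimeFormat('default', {
      year: this.getAttribute('year') || undefined,
      month: this.getAttribute('month') || undefined,
      day: this.getAttribute('day') || undefined,
      hour: this.getAttribute('hour') || undefined,
      minute: this.getAttribute('minute') || undefined,
      second: this.getAttribute('second') || undefined
    }).format(date);
  }
}

customElements.define('time-format', TimeFormat);
```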
To use it, the following piece of HTML is necessary (Listing 2-4).
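With the attribute names assumed above, the usage could look like this:

```html
<time-format
  datetime="2020-12-01"
  year="numeric" month="long" day="numeric"
  hour="numeric" minute="numeric" second="numeric">
</time-format>
```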
The class has only one method, connectedCallback() – the browser calls it when a <time-format> element is added to the document – and it uses the built-in Intl.DateTimeFormat data formatter, well supported across browsers, to show a nicely formatted time. We need to register our new element by customElements.define(tag, class). And then we can use it everywhere. The output is shown in Figure 2-1.
Observe Unset Elements
If the browser encounters any <time-format> elements before customElements.define got called, it will not produce an error. The element is yet unknown, just like any non-standard tag. It will render into nothing. That’s hard to capture. To make it visible we could add a style that uses the pseudo CSS selector :not(:defined).
When customElements.define is called, the element is “upgraded”. A new instance of the TimeFormat class is created for each element, and the connectedCallback method is called. The element becomes :defined, then.
A very helpful stylesheet to achieve the visibility of not yet upgraded components is shown in Listing 2-5:
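A minimal stylesheet in that spirit could look like this (the exact styling is an assumption):

```css
/* make not yet upgraded <time-format> elements visible */
time-format:not(:defined) {
  display: inline-block;
  min-height: 1em;
  border: 1px dashed red;
}
```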
In the example the JavaScript part is missing (to simulate an error) and hence the style makes the element visible. The result is shown in Figure 2-2:
Custom Elements API
To get information about custom elements, there are two helpful methods:
- customElements.get(name) – returns the class for a custom element with the given name.
- customElements.whenDefined(name) – returns a promise that resolves (without value) when a custom element with the given name becomes upgraded.
It’s important to start the rendering in connectedCallback, not in the constructor. In the example above, the element content is rendered (created) that way. The constructor is not suitable. When the constructor is called, it’s still too early. The element is created, but the browser has not yet processed and assigned attributes at this stage. For instance, calls to getAttribute would return null.
An additional reason is performance. In the later stages of the render process some code might decide not to render the element or to replace the rendered content with some message. Imagine a grid, which might become huge, but due to some attribute setting it’s replaced by a “too much data” message. Processing all attributes first and rendering afterwards makes sense.
The connectedCallback method triggers when the element is added to the document. Not just appended to another element as a child, but when it actually becomes a part of the page. So we can build a detached DOM, create elements and prepare them for later use. They will only be actually rendered when they make it into the page. In the first examples that’s always the case, because the element is written directly into the page. However, in a more dynamic environment, such as a Single Page App (SPA), this would not be the same.
2.2 Observing Attributes
In the current implementation of <time-format>, further attribute changes don’t have any effect after the element is rendered. That’s strange for an HTML element. Usually, when we change an attribute, like href of an anchor element, we expect the change to be immediately visible. All kinds of effects and animations need this behavior.
We can observe attributes by providing their list in the observedAttributes() method. It’s static (not part of the prototype), because it’s a global definition, made once for all instances of the element. For such attributes, the method attributeChangedCallback is called when they are modified. It doesn’t trigger for any other attribute, for performance reasons.
Listing 2-6 shows a new <time-format> version that auto-updates when attributes change.
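A sketch of that version (in the spirit of Listing 2-6):

```js
class TimeFormat extends HTMLElement {
  render() {
    const date = new Date(this.getAttribute('datetime') || Date.now());
    this.innerHTML = new Intl.DateTimeFormat('default', {
      year: this.getAttribute('year') || undefined,
      month: this.getAttribute('month') || undefined,
      day: this.getAttribute('day') || undefined,
      hour: this.getAttribute('hour') || undefined,
      minute: this.getAttribute('minute') || undefined,
      second: this.getAttribute('second') || undefined
    }).format(date);
  }

  connectedCallback() {
    if (!this.rendered) {
      this.render();
      this.rendered = true;
    }
  }

  static get observedAttributes() {
    return ['datetime', 'year', 'month', 'day', 'hour', 'minute', 'second'];
  }

  attributeChangedCallback() {
    // re-render whenever one of the observed attributes changes
    this.render();
  }
}

customElements.define('time-format', TimeFormat);
```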
The usage doesn’t look much different from the first example.
However, when you change one of the observed attributes in your code, the element re-renders automatically:
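For instance, a periodic update of the datetime attribute is enough to get a ticking display (a hypothetical snippet, matching the sketch above):

```js
setInterval(() => {
  document.querySelector('time-format')
    .setAttribute('datetime', new Date().toISOString());
}, 1000);
```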
The rendering logic is moved to a render() helper method. We call it once when the element is inserted into the document. After a change of an attribute listed in observedAttributes(), the method attributeChangedCallback triggers and re-renders the element. The call to the render method must be implemented in the component. At first glance there is not much magic here, and in comparison with frameworks such as Angular or React it might feel primitive. But the ability to control the rendering and have a clear cycle makes the implementation very handy and straightforward.
Attribute Data
The component itself can handle only scalar values. That means you’re limited to string, boolean, and number. Anything else will run through toString() and may end up as something like [object Object] or, in case of null, as the string “null”. That’s a lot weaker than the binding we can see in Angular, for example. Of course, a private implementation can detect such types and use JSON.stringify and JSON.parse. That is, indeed, the slowest but most robust way to serialize complex data.
Another way is to ditch the usage of observed attributes completely and use a programmatic way. This would, however, limit the component to being accessible by code only. That’s a serious limitation indeed, but let’s explore an example (Listing 2-7) anyway to give you the idea.
The key is line 19 with the access to the custom property content. Here we assign an object and call the render method immediately. That bypasses the conversion to the string type and keeps the stringifier working as expected. Figure 2-3 shows the expected result.
The observation is still present to allow a static value for initialization (see Listing 2-8).
Discussing the Options
To monitor external data without the ability to use the observation you may also consider using a Proxy object and optionally the MutationObserver class. Both are ES2015 classes and hence native APIs. A Proxy gives you the ability to intercept the access to an object’s properties. Whenever some outside code accesses a property, a callback is being called. This could be used to trigger the render method.
The MutationObserver Type
A MutationObserver, on the other hand, monitors the DOM itself and calls a callback if something changes. However, this function runs in a microtask and is not entirely synchronous. That means the new attribute value (if observed) may not be current yet. To avoid the risk of non-deterministic behavior, the MutationRecord instance that the callback returns does not give access to the new value. Of course, there are several ways to patch the prototype or add an interceptor, but it all feels hackish and not very reliable. The biggest risk is that the API changes internally without any notification and the code eventually fails out of nowhere with a simple browser update. I made some experiments with this and never found a satisfying solution. Just in case you want to dig deeper into this, the following code snippet gives you the general usage of such an observer:
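A general usage sketch (the element selector obj-element is a placeholder):

```js
const element = document.querySelector('obj-element');

const observer = new MutationObserver(mutations => {
  for (const mutation of mutations) {
    // mutation.attributeName tells which attribute changed,
    // mutation.oldValue holds the previous value – the new value is not exposed here
    console.log(`Attribute ${mutation.attributeName} changed`);
  }
});

observer.observe(element, {
  attributes: true,
  attributeOldValue: true
});
```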
Proxy
A Proxy class is pure JavaScript and handles just an object. But a Web component is an object, so this sounds like a feasible solution. However, the amount of boilerplate code is significant. In a real-life scenario you would move this to a base class, but the example nonetheless shows the weakness of the HTML 5 API (and its power, too).
Again, there is now a way to use the programmatic access here:
document.querySelector('obj-element').content = ...
Because the attribute observation is still operational, the external access from HTML works too. Insofar it does exactly what’s expected. The external access and the programmatic access both write into the property content. That’s observed by the proxy handler’s setter path. Here we check that it’s really an observed attribute (line 10) and trigger the regular attribute observer (line 11). The difference is that the value received by the proxy is still an object (while the internal API would have called toString first and delivered [object Object]). Now we can transform it into the stringified version and store this in the attribute. The actual render code expects an object (line 21). To make this work, and to use the JSON.parse method to return an actual object in a transparent way, the getter method at the end (line 47) transforms the string back.
2.3 Rendering Order
When the browser’s HTML parser builds the DOM, elements are processed one after another, parents before children. Imagine we have something like that:
<outer-element><inner-element></inner-element></outer-element>
Then an <outer-element> element is created and connected to the DOM first, and then <inner-element>.
That leads to important consequences for custom elements. For example, if a custom element tries to access innerHTML in connectedCallback, it gets nothing:
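A small demonstration could look like this (the name is just sample content):

```html
<script>
  customElements.define('user-info', class extends HTMLElement {
    connectedCallback() {
      alert(this.innerHTML); // empty – the children are not parsed yet
    }
  });
</script>

<user-info>Joerg</user-info>
```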
If you run it, the alert is empty. That’s exactly because there are no children at that stage; the DOM is unfinished. The HTML parser connected the custom element <user-info> and is going to proceed to its children, but they’re not there yet.
If we want to pass information to custom elements, we can use attributes. They are available immediately.
Delay Access
If we really need the content immediately, we can defer access to it with a zero-delay setTimeout.
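Applied to the previous example, that could look like this:

```js
customElements.define('user-info', class extends HTMLElement {
  connectedCallback() {
    // defer the access until the parser has finished with the children
    setTimeout(() => alert(this.innerHTML)); // "Joerg"
  }
});
```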
Now the alert shows “Joerg”, because we run it asynchronously, after the HTML parsing is complete. Of course, this solution is not perfect. If nested custom elements also use setTimeout to initialize themselves, then they queue up: the outer setTimeout triggers first, and then the inner one. And that’s simply the wrong order.
Let’s demonstrate that with an example:
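A sketch that produces the output shown below:

```html
<script>
  customElements.define('outer-element', class extends HTMLElement {
    connectedCallback() {
      console.log('outer connected');
      setTimeout(() => console.log('outer initialized'));
    }
  });

  customElements.define('inner-element', class extends HTMLElement {
    connectedCallback() {
      console.log('inner connected');
      setTimeout(() => console.log('inner initialized'));
    }
  });
</script>

<outer-element>
  <inner-element></inner-element>
</outer-element>
```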
Output order:
- outer connected
- inner connected
- outer initialized
- inner initialized
We can clearly see that the outer element finishes initialization before the inner one.
There’s no built-in callback that triggers after nested elements are ready. If needed, we can implement such a thing on our own. For instance, inner elements can dispatch events like initialized, and outer ones can listen and react to them.
Introducing a Life Cycle
The current implementation of a Web Component is very simple. That makes it easy to get started, but some additional implementation effort is required. One idea to solve the issues with the last example is to introduce a loading callback. Once the render stage has passed, the component fires an event or – even better – resolves a Promise. The outer component can wait for this to happen and proceed once the children confirm they are done. Again, an example that does not work so well:
Let’s assume a piece of HTML like this:
The expected render output would be this one:
If you execute this “as is”, the result is wrong, as shown in Figure 2-4:
Look at the DOM: the <inner-element> comes after the <hr>, which is not what we expected.
So, how could we make the outer component wait for all the children? Attaching events to the component itself wouldn’t be an option, because the content might be simple static text and a text node can’t trigger events.
Waiting for other custom elements would be easy. We could just loop through all outer elements, check whether they have a specific state, and wait. Also, to make it easier to handle, the render method could be async. But in the case of regular HTML and static text this will not work. One proposal is to set an explicit trigger for the render part:
The element <content-done /> is such a trigger. Once it occurs, it calls the render method of the outermost element. The whole code is in the next example (Listing 2-11).
There are two critical parts here. First, the outermost element must be prepared to receive a call. In the example shown above it’s the custom render method. Second, the trigger element looks a bit awkward and requires some context knowledge the template developer needs to have; a thing we usually try to avoid.
If you watch the console output carefully you see that the inner part is called twice. First, the immediate call from the render engine once the component is registered. Second, the call from the custom trigger element. While this doesn’t seem relevant in the demo, it could bring a serious performance penalty in a huge and more complex application.
If you have custom components only, this render approach is much more feasible.
Here the <inner-element> could call render, and if that happens for all elements it will work smoothly. You just have to test whether the content exists to let the innermost element render itself immediately. The @nyaf thin library shown in the appendix does exactly this, along with other libraries such as Polymer or Lightning.
An even better way is to wait for elements being available. That can be done by simply delaying the component registration until after the document ready event.
Here the code looks for an already available ready state or the finishing event.
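Such a guard could look like this (the class names follow the earlier example):

```js
function registerComponents() {
  customElements.define('outer-element', OuterElement);
  customElements.define('inner-element', InnerElement);
}

if (document.readyState !== 'loading') {
  // the document is already parsed
  registerComponents();
} else {
  // wait for the parser to finish
  document.addEventListener('DOMContentLoaded', registerComponents);
}
```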
This works very well for an application with a static appearance. However, if you have components loaded dynamically, as it happens in a Single Page App with a router logic, this will not work. The final (and best) solution depends on the kind of app you write and may change over time.
2.4 Customized Built-in Elements
New elements that we create, such as <time-format>, don’t have any associated semantics. They are unknown to search engines, and accessibility devices can’t handle them. Such things can be important, though. A search engine would be interested to know that we actually show a time. And if we’re making a special kind of button, why not reuse the existing <button> functionality?
We can extend and customize built-in HTML elements by inheriting from their classes. For example, buttons are instances of HTMLButtonElement. Extend HTMLButtonElement with a new class:
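For instance, a sketch of a button that greets on click:

```js
class HelloButton extends HTMLButtonElement {
  constructor() {
    super();
    this.addEventListener('click', () => alert('Hello!'));
  }
}
```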
Provide a third argument to customElements.define that specifies the type to extend by using the tag’s name:
customElements.define('hello-button', HelloButton, {extends: 'button'});
There may be different tags that share the same DOM class; that’s why specifying extends is needed. To get the custom element, insert a regular <button> tag, but add the is attribute to it like this:
<button is="hello-button">...</button>
Here’s a full example:
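Put together, a complete page could look roughly like this:

```html
<script>
  class HelloButton extends HTMLButtonElement {
    constructor() {
      super();
      this.addEventListener('click', () => alert('Hello!'));
    }
  }

  customElements.define('hello-button', HelloButton, { extends: 'button' });
</script>

<button is="hello-button">Click me</button>
<button is="hello-button" disabled>Disabled</button>
```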
The new button extends the built-in one. That means it keeps the same styles and standard features like the disabled attribute.
If you prefer using the API, especially the document.createElement method, then you’ll find a second parameter that is an object with just one property, is.
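In code, that looks like this:

```js
const button = document.createElement('button', { is: 'hello-button' });
button.textContent = 'Click me';
document.body.appendChild(button);
```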
The component itself must be registered as before, but now with the extension instruction:
That way you can modify any existing element and there is no need to create custom tags from scratch.
2.5 Advantage of TypeScript
In the previous examples I used only pure ES2015 code. The TypeScript syntax would be similar. However, using TypeScript’s features could make it even easier to handle certain tasks. One of the important parts is the handling of attributes. As already shown in the previous examples, the observedAttributes method is responsible for triggering the observation. Accessing those attributes means calling methods like this.getAttribute('name'). And here lies a culprit: the usage of strings is error-prone. A much better way would be if we could use named properties, like this.name.
Using Generics
The key is a generic. In TypeScript you can assign a type to a type placeholder to achieve this. However, a full implementation is quite tricky and it would be nice if we could handle this internally and not bother the developer with all the details. Let’s work this out step by step.
The component shall finally look like this:
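A sketch of the intended usage, with the names BaseComponent, TimeProperties, and the Observes decorator taken from the explanation that follows (all of them are developed below; the body is illustrative):

```ts
@Observes(TimeProperties)
export class TimeFormat extends BaseComponent<TimeProperties> {
  connectedCallback() {
    // typed access instead of this.getAttribute('datetime') etc.
    const date = new Date(this.data.datetime || Date.now());
    this.innerHTML = date.toLocaleString();
  }
}

customElements.define('time-format', TimeFormat);
```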
As you can see, the getAttribute calls are replaced by this.data calls that IntelliSense understands through the generic of the base class BaseComponent. However, this alone will not work, because the generic is stripped out by the TypeScript transpiler and JavaScript doesn’t understand it. To get the type at runtime we need an instance of TimeProperties. The first decision to make is the type itself. It must be a class, because we need a runtime instance. An interface would not work here. So let’s get the class:
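A sketch of such a property class (the member list is an assumption based on the attributes used earlier):

```ts
export class TimeProperties {
  // initializers are required so the properties exist at runtime (see below)
  datetime: string = '';
  year: string = '';
  month: string = '';
  day: string = '';
  hour: string = '';
  minute: string = '';
  second: string = '';
}
```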
Then we need to configure the base class. This involves two steps. First, the assignment of the generic type to the property data. This is primarily for convenient access. Second, a method that retrieves the properties of the TimeProperties class and returns them as an array that observedAttributes can handle. The difficult part here – and that’s the point where the coding can get a bit weird – is that this method is static, while on the component we work with an instance. Static members are initialized before instance members, and especially before the constructor gets called. Unfortunately TypeScript does not really have an elegant way to provide this, so we mix in some JavaScript here. The result is a base class as shown next:
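A possible sketch of such a base class (the details may differ from the book’s original listing; _keys is the hidden property the decorator below attaches to the component class):

```ts
export abstract class BaseComponent<T extends object> extends HTMLElement {

  // the list of observed attributes is taken from the hidden static property
  // _keys that the @Observes decorator attaches to the component class
  static get observedAttributes(): string[] {
    return (this as any)._keys || [];
  }

  // convenient, typed access to the observed attributes
  protected get data(): T {
    const result: any = {};
    const keys: string[] = (this.constructor as any)._keys || [];
    keys.forEach(key => (result[key] = this.getAttribute(key)));
    return result as T;
  }
}
```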
The class is abstract to enforce the implementation. It’s generic, as we planned. It extends the usual HTMLElement class to make a real Web component. The crucial part is the call to get the array of properties: return (this.constructor as any)._keys (line 17). Here we access the constructor object, which is available in the static initialization phase. But how do we add the data to this property at runtime?
The trick is using a decorator. Decorators are a TypeScript feature that provides additional metadata to an object. They are static by definition and instantiated before the actual object. Technically they are just pure function calls, so we can do anything within the decorator. Decorators will become part of ECMAScript sooner or later (currently the feature is experimental), but due to the polyfill the TypeScript compiler creates there is no risk in using them. The head of the class would now look like this:
The decorator Observes is defined like this:
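A sketch of such a decorator, together with the helper type Type<T> mentioned below (the details are an assumption, not the library’s literal code):

```ts
// helper type with a constructor signature, so we can write `new type()`
export interface Type<T> extends Function {
  new (...args: any[]): T;
}

export function Observes<T extends object>(type: Type<T>) {
  // class decorator: target is the component's constructor function
  return function (target: any) {
    // create an instance of the property class and store its keys
    // as a hidden static property on the component class
    target._keys = Object.keys(new type());
  };
}
```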
The inner part defines where the decorator is allowed to appear. The given signature (line 5) is for a class. The class definition itself is delivered by the infrastructure through the target parameter. On that object (internally it’s a Function object) we create a dynamic property. An instance property would go to target directly, while a static property goes to the constructor. Pure JavaScript magic, by the way. The strategy has nothing to do with Web components or TypeScript. To keep TypeScript from complaining, a helper type is created, called Type<T>. This helper defines a constructor signature to allow the code to create an actual instance (new type()). On such an instance we can then call Object.keys to get all the property names.
You have seen that the property class has initializers for the members (datetime: string = '';). That’s necessary because otherwise the TypeScript transpiler would strip this code out to make a smaller bundle and assume that JavaScript can handle this (it can), but here we really need values at runtime, and hence the initializers enforce the existence of the properties. The actual values don’t matter, as long as you don’t need any defaults.
Summary
That might sound complicated and seems to contradict the simplicity of easy-to-use Web components. But the effort to create a base class is a one-time task and its usage is a lot easier afterwards.
Figure 2-6 shows the example with a typed base interface from the type library and additional comments on the property year. The editor is now able to help a lot while selecting the right property. That’s the main reason for the effort, because in the long term it will increase the code quality.
3 Shadow DOM
The Shadow DOM brings encapsulation. It allows a component to have its very own DOM tree, that can’t be accidentally accessed from the main document, may have local style rules, and more. When creating a new component, the component’s developer doesn’t need to know anything about the application this particular component is running in. That further simplifies the development.
3.1 Preparation
To recap some of the facts shown here, it’s recommended to have the Chrome browser available. To deal with the Shadow DOM, just activate the appropriate feature (see Figure 3-1) in Dev Tools settings (F12) and you’re good to go.
3.2 Built-in Shadow DOM
Complex browser controls are created and styled internally in different ways. Let’s have a look at <input type="range"> as an example. The browser uses DOM/CSS internally to draw them. That DOM structure is normally hidden from the developer, but we can see it in the developer tools with the Shadow DOM option enabled, as mentioned before.
Then <input type="range"> looks like shown in Figure 3-2:
What you see under #shadow-root is called “shadow DOM”. It’s a piece of completely isolated code, made with the standard techniques like HTML and CSS. We can’t get built-in shadow DOM elements by regular JavaScript calls or selectors. These are not regular children, but a powerful encapsulation technique.
In the example above, we can see a useful attribute, -webkit-slider-runnable-track. It’s non-standard; in fact, it exists for historical reasons. We can use it to style subelements with CSS like this:
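For instance, a sketch that colors the slider track:

```css
input::-webkit-slider-runnable-track {
  background: red;
}
```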
Once again, this is a non-standard attribute. It’s specific to browsers using the Chromium engine. But a similar structure can be expected in all other engines, and sometimes it helps to achieve weird requirements.
Here, it’s just a primer to show that there is more under the hood. The Shadow DOM of a Web component is a way to work with encapsulation in a well defined way.
3.3 Shadow Tree
A DOM element can have two types of DOM sub-trees:
- Light tree: a regular DOM sub-tree, made of HTML children. All sub-trees that we’ve seen so far were “light”.
- Shadow tree: a hidden DOM sub-tree, not reflected in HTML, hidden from the user’s eyes.
If an element has both, then the browser renders only the shadow tree. But we can set up a kind of composition between shadow and light trees as well. Some more details are explained in the chapter Slots.
Terms
There are a few terms here you should know:
- Shadow Host: The host component
- Shadow Root: The root of the partial tree that forms the shadow tree
- Shadow DOM: An isolated DOM that contains the content of the tree
- Shadow Boundary: The border around the whole thing, includes root and tree
The relations between these parts are shown in Figure 3-3.
The access to the inner DOM is the same as for the regular DOM, that means you can use the same methods to manipulate the content. The methods might return different values and access different parts of the page (or nothing at all), though.
Using Shadow Trees
The shadow tree can be used in custom elements to hide component internals and apply component-local styles. For example, this <show-hello> element hides its internal DOM in a shadow tree:
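A sketch of such an element:

```html
<script>
  customElements.define('show-hello', class extends HTMLElement {
    connectedCallback() {
      const shadow = this.attachShadow({ mode: 'open' });
      shadow.innerHTML = `<p>Hello, ${this.getAttribute('name')}</p>`;
    }
  });
</script>

<show-hello name="Joerg"></show-hello>
```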
Figure 3-4 shows how the resulting DOM looks in the Chrome dev tools. All the content is placed under “#shadow-root (open)” tag:
The call to this.attachShadow({mode: …}) creates a shadow tree. The options are “open” and “closed”; I’ll explain this shortly.
Limitations
There are two limitations you have to consider:
- We can create only one shadow root per component.
- The component must be either a custom element or derive from one of these:
  - <article>, represented through the API class HTMLArticleElement
  - <aside>, represented through the API class HTMLAsideElement
  - <blockquote>, represented through the API class HTMLBlockquoteElement
  - <body>, represented through the API class HTMLBodyElement
  - <div>, represented through the API class HTMLDivElement
  - <footer>, represented through the API class HTMLFooterElement
  - <h1>…<h6>, represented through the API class HTMLHeadingElement
  - <header>, represented through the API class HTMLHeaderElement
  - <main>, represented through the API class HTMLMainElement
  - <nav>, represented through the API class HTMLNavElement
  - <p>, represented through the API class HTMLParagraphElement
  - <section>, represented through the API class HTMLSectionElement
  - <span>, represented through the API class HTMLSpanElement
Other elements, like <img>, can’t host a shadow tree. The basic rule is that the element must be able to host some content at all.
Modes
The mode option sets the encapsulation level. It must have one of two values:
- “open” – the shadow root is available as this.shadowRoot. Any code (JavaScript) is able to access the shadow tree of the element.
- “closed” – this.shadowRoot is always null, and there is no access through code (a sort of total isolation).
We can only access the shadow DOM by the reference returned by attachShadow (and probably hidden inside a class). Browser-native shadow trees, such as the one of <input type="range">, are closed. There’s no way to access them.
The shadow root, returned by attachShadow, is like an element. We can use innerHTML or DOM methods, such as append, to populate it. In fact, the @nyaf thin library code uses innerHTML to assign the rendered content to the web component. It is just as simple as it sounds.
The element with a shadow root is called a “shadow tree host”, and is available as the shadow root’s host property. This works only in “open” mode.
If you use this in a base class, it’s easy and powerful to copy data from host to shadow and back.
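A quick check in code (assuming the <show-hello> element from above):

```js
const elem = document.querySelector('show-hello');
// in "open" mode the host property points back to the element
alert(elem.shadowRoot.host === elem); // true
```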
3.4 Encapsulation
The Shadow DOM is strongly delimited from the main document. Shadow DOM elements are not visible to querySelector calls from the light DOM. In particular, Shadow DOM elements may have identifiers that conflict with those in the light DOM. They must be unique only within the shadow tree. Also, the shadow DOM has its own stylesheets. Style rules from the outer DOM don’t get applied, at least not directly. There are pseudo classes that help to apply externally provided styles. You can find more about this in the chapter Styling.
An example shows how this works directly. First, a global style is created:
Imagine a document, that contains that style and a web component definition:
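A sketch of such a document (the element name custom-dialog and the content are placeholders):

```html
<style>
  /* the global style from the document */
  p { color: red; }
</style>

<custom-dialog></custom-dialog>

<script>
  customElements.define('custom-dialog', class extends HTMLElement {
    connectedCallback() {
      const shadow = this.attachShadow({ mode: 'open' });
      shadow.innerHTML = `
        <style> p { font-weight: bold; } </style>
        <p>Hello from the shadows!</p>`;
    }
  });

  const elem = document.querySelector('custom-dialog');
  // querying from the light DOM finds nothing inside the shadow tree
  console.log(document.querySelectorAll('p').length);        // 0
  // querying from inside the shadow tree works
  console.log(elem.shadowRoot.querySelectorAll('p').length); // 1
</script>
```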
Three effects can be recognized here:
- The style from the document does not affect the shadow tree. The color is not red.
- The style from the inside works. The element is bold.
- To get elements in shadow tree, we must query from inside the tree (elem.shadowRoot).
In the example I use length to check for elements. If there is nothing, the value would be 0, and at runtime JavaScript treats this as false.
Shadow DOM without Components
Just as a side step, it’s worth mentioning that using Web components is not a condition for using the shadow DOM. You can create a shadow DOM on-the-fly without using Web components. Assume this regular HTML element:
Add some code to see how it creates the shadow DOM:
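A sketch of both parts together (the id host is a placeholder):

```html
<div id="host">Regular light DOM content</div>

<script>
  const host = document.querySelector('#host');
  const shadow = host.attachShadow({ mode: 'open' });
  // the shadow tree now replaces the light content in the rendering
  shadow.innerHTML = '<p>Content hidden from the rest of the page</p>';
</script>
```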
You now have an existing element upgraded with a piece of isolated DOM and some content hidden from the rest of the page (result in Figure 3-5).
Closing the Shadow Root
In all previous examples and in most examples in this book we use the open mode to attach a shadow root. If you really don’t need access to the root and have nothing to apply programmatically, consider closing the root by using closed.
In that case element.shadowRoot returns null and obviously you can do nothing with it.
3.5 The Shadow Root API
The ShadowRoot interface of the Shadow DOM API is the root node of a DOM subtree that is rendered separately from a document’s main DOM tree. This is what you get with element.shadowRoot.
Properties
Some properties give access to the internal parts of a component.
- delegatesFocus: a readonly property that returns a boolean indicating whether delegatesFocus was set when the shadow was attached.
- host: a readonly property that returns a reference to the DOM element the shadow root is attached to.
- innerHTML: sets or returns a reference to the DOM tree inside the shadow root.
- mode: a readonly property that returns the mode of the shadow root – either open or closed. This defines whether or not the shadow root’s internal features are accessible from JavaScript.
The ShadowRoot interface includes the following properties defined on the DocumentOrShadowRoot mixin. Note that this is currently only implemented by Chrome. Other browsers make this available in the Document.
- activeElement: a readonly property that returns the Element within the shadow tree that has focus.
- styleSheets: a readonly property that returns a StyleSheetList of CSSStyleSheet objects for stylesheets explicitly linked into, or embedded in, a document.
Methods
Some methods extend this API.
- getSelection(): a method that returns a Selection object representing the range of text selected by the user, or the current position of the caret.
- elementFromPoint(): a method that returns the topmost element at the specified coordinates.
- elementsFromPoint(): a method that returns an array of all elements at the specified coordinates.
- caretPositionFromPoint(): a method that returns a CaretPosition object containing the DOM node containing the caret, and the caret’s character offset within that node. The caret is the blinking point where the user starts typing.
Similar incompatibilities as with the properties appear when accessing these methods. This depends on browser version and manufacturer. Because the situation changes with each new version, it’s hard to give clear advice here. Best is to first define which browsers and which versions you need to support. Then have a look at MDN (Mozilla Developer Network) to look up any support issues and seek a polyfill to help solve compatibility issues.
3.6 Summary
In this chapter I covered the shadow DOM, the isolated inner part of a Web component. Several API calls are available to deal with the shadow DOM and its root element. You could also see examples of how the shadow DOM looks in a browser’s development tools.
4 Events
The idea behind the shadow tree is to encapsulate internal implementation details of a component. That requires exposing events explicitly if you still want to interact with the inner parts of a component.
Let’s say a click event happens inside the shadow DOM of the <user-card> component. But scripts in the main document have no idea about the shadow DOM internals. So, to keep the details encapsulated, the browser re-targets the event. Events that happen in the shadow DOM have the host element as the target when caught outside of the component.
4.1 Events in ECMAScript
Before you deal with custom events you should have a basic understanding of the event schema in JavaScript.
Event Handlers
On the occurrence of an event, the application executes a set of related tasks. The block of code that achieves this purpose is called the event handler. Every HTML element has a set of events associated with it. We can define how the events will be processed in JavaScript by using event handlers. Sometimes the handler appears as a callback, a function that’s provided as a parameter. You can read the callback as the technical solution to create an event handler.
Assign a Handler
To assign a handler you have two options. First, an attribute on the HTML element. In that case the name starts with on; for instance, the click event is called onclick. Second, you can attach an event to an element object using the HTML 5 API. In that case the pure name is used; click remains click. Because we code Web components and deal with them as objects, you will usually use the second method exclusively. There is also a combination of both methods, where you assign the handler function to an event property. These event properties have the same names as the attributes (with the on prefix).
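The three variants side by side (a hypothetical button element):

```html
<!-- Variant 1: HTML attribute -->
<button onclick="console.log('clicked')">Click me</button>

<script>
  const button = document.querySelector('button');

  // Variant 2: HTML 5 API
  button.addEventListener('click', () => console.log('clicked'));

  // Variant 3: event property
  button.onclick = () => console.log('clicked');
</script>
```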
Choose the Right Events
While the handling is not that difficult, choosing the right event is much harder. Of course, just using click sounds easy. But the sheer number of events is frightening.
HTML 5 Standard Events
The standard HTML 5 events are listed in the following table for your reference. The script indicates a JavaScript function to be executed against that event.
Attribute[^evt] | Realm | Description |
---|---|---|
abort | document | Triggers on an abort event |
afterprint | document | Triggers after the document is printed |
beforeonload | document | Triggers before the document load |
beforeprint | document | Triggers before the document is printed |
blur | input | Triggers when an element loses focus |
canplay | media | Triggers when the media can start play, but might have to stop for buffering |
canplaythrough | media | Triggers when the media can be played to the end, without stopping for buffering |
change | input | Triggers when an element changes |
click | common | Triggers on a mouse click |
contextmenu | common | Triggers when a context menu is triggered |
dblclick | common | Triggers on a mouse double-click |
drag | dragdrop | Triggers when an element is dragged |
dragend | dragdrop | Triggers at the end of a drag operation |
dragenter | dragdrop | Triggers when an element has been dragged to a valid drop target |
dragleave | dragdrop | Triggers when an element leaves a valid drop target |
dragover | dragdrop | Triggers when an element is being dragged over a valid drop target |
dragstart | dragdrop | Triggers at the start of a drag operation |
drop | dragdrop | Triggers when the dragged element is being dropped |
durationchange | media | Triggers when the length of the media is changed |
emptied | media | Triggers when a media resource element suddenly becomes empty |
ended | media | Triggers when the media has reached the end |
error | document | Triggers when an error occurs |
focus | input | Triggers when an element gets focus |
formchange | input | Triggers when a form changes |
forminput | input | Triggers when a form gets user input |
haschange | document | Triggers when the document has changed |
input | input | Triggers when an element gets user input |
invalid | input | Triggers when an element is invalid |
keydown | input | Triggers when a key is pressed |
keypress | input | Triggers when a key is pressed and released |
keyup | input | Triggers when a key is released |
load | document | Triggers when the document loads |
loadeddata | media | Triggers when media data is loaded |
loadedmetadata | media | Triggers when the duration and other media data of a media element is loaded |
loadstart | media | Triggers when the browser starts to load the media data |
message | document | Triggers when a message is received |
mousedown | common | Triggers when a mouse button is pressed |
mousemove | common | Triggers when the mouse pointer moves |
mouseout | common | Triggers when the mouse pointer moves out of an element |
mouseover | common | Triggers when the mouse pointer moves over an element |
mouseup | common | Triggers when a mouse button is released |
mousewheel | common | Triggers when the mouse wheel is being rotated |
offline | document | Triggers when the document goes offline |
online | document | Triggers when the document comes online |
pagehide | document | Triggers when the window is hidden |
pageshow | document | Triggers when the window becomes visible |
pause | media | Triggers when the media data is paused |
play | media | Triggers when the media data is going to start playing |
playing | media | Triggers when the media data has started playing |
popstate | document | Triggers when the window’s history changes |
progress | media | Triggers when the browser is fetching the media data |
ratechange | media | Triggers when the media data’s playing rate has changed |
readystatechange | document | Triggers when the ready-state changes |
redo | input | Triggers when the document performs a redo |
resize | document | Triggers when the window is resized |
scroll | common | Triggers when an element’s scrollbar is being scrolled |
seeked | media | Triggers when a media element’s seeking attribute is no longer true, and the seeking has ended |
seeking | media | Triggers when a media element’s seeking attribute is true, and the seeking has begun |
select | common | Triggers when an element is selected |
stalled | media | Triggers when there is an error in fetching media data |
storage | document | Triggers when a Web Storage area (localStorage or sessionStorage) changes |
submit | input | Triggers when a form is submitted |
suspend | media | Triggers when the browser has been fetching media data, but stopped before the entire media file was fetched |
timeupdate | media | Triggers when the media changes its playing position |
undo | input | Triggers when a document performs an undo |
unload | document | Triggers when the user leaves the document |
volumechange | media | Triggers when the media changes the volume, also when the volume is set to “mute” |
waiting | media | Triggers when the media has stopped playing, but is expected to resume |
That list is probably not entirely complete, but shows the amazing number of events available. The second column is the category an event belongs to.
Event Bubbling
Event bubbling is a strange term but nonetheless very important. You can view an HTML page as a stack of layers. Each level of the document tree forms such a layer. If you have a document, and within this document is a <div> tag, then the div sits on top of the document's body: the lower layer is the body and the upper layer is the div. This three-dimensional view of the otherwise flat document is helpful to understand the event handling. Assume the user clicks with the mouse onto the div element. We skip the operating system details here and capture only events already assigned to the browser's window.
To get the bubbling right, it's important to understand that the mouse event comes from the top. So it hits the div first. If there is a handler attached, it can be handled. Otherwise the event is forwarded to the next layer, in our example the document itself. This is called bubbling, because it looks like bubbles in a bottle moving upwards until they hit something and burst. In the browser the bubbling goes even further:
Target -> Body -> HTML -> Document -> Window
If you have many elements on the page that form a deep hierarchy, then the bubbles have a long way through all of them. But this behavior is something you can control. The way upwards is technically called propagation. In the API of the event handler you get an event object. This object can be used to change the behavior by calling the stopPropagation method as shown here:
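A minimal sketch; the elements are illustrative:

```typescript
const div = document.createElement('div');
div.textContent = 'click me';
document.body.appendChild(div);

div.addEventListener('click', (e: Event) => {
  // Handle the click here and keep it from bubbling up to body and document.
  e.stopPropagation();
  console.log('handled on the div, no further bubbling');
});

document.addEventListener('click', () => {
  // This will not run for clicks on the div above.
  console.log('reached the document');
});
```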
The Event Object
The event object provides a lot of information about the event. For example, the mouse events deliver the clicked mouse button and coordinates. The key events obviously provide the actual key the user hit. Among these simple properties there are a few subtle ones.

Because the propagation may let the event bubble, it's not entirely clear whether we are on the target directly or somewhere up the chain. That's the reason we can capture the actual element using event.target. That's the element that received the click or whatever. But the handler could be somewhere up (in the direction of the document). This element is delivered through event.currentTarget. Imagine a list (<ul><li>...</li></ul>). Instead of attaching a handler to each list item, it's much easier to attach just one to the whole list (ul). If the user clicks an item, the target is the li element, while currentTarget is the ul. Quite often, though, both properties deliver the same element.
Note that the capturing phase actually precedes the bubbling phase: before the event bubbles up, it travels down from the window to the target, so each element on the way is informed of the event before the target handles it.
Stop Other Handlers
If an element has multiple event handlers on a single event, then even if one of them stops the bubbling, the other ones still execute. In other words, event.stopPropagation stops the move upwards, but on the current element all other handlers will still run. To stop the bubbling and prevent handlers on the current element from running, there's the method event.stopImmediatePropagation. After it, no other handlers execute.
Other Types of Propagation
Event capturing is another type of event propagation. It is basically the reverse direction: from the outer document down to the event's source.
Event Capturing
To turn event capturing on, pass true as the third argument to the addEventListener method.
This type of propagation is rarely used. Instead of working from inner to outer it flips the direction and goes from outer to inner. Here is the hierarchy.
Window -> Document -> HTML -> Body -> Target
Internally the capturing phase precedes the bubbling phase. That's sort of logical, because the operating system has no idea of the internal structure of your document. So it sends the mouse click to the browser window. That's where the capture phase starts, silently and in the background, until it hits an element that has a handler attached. Then the bubbling phase begins. That's where we become aware of the event and start dealing with it.
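A small sketch of turning capturing on; the logged order shows that the document sees the event before the button handler runs:

```typescript
const button = document.createElement('button');
button.textContent = 'capture demo';
document.body.appendChild(button);

// Third argument true (or { capture: true }) registers for the capture phase.
document.addEventListener('click', () => console.log('1: document (capture phase)'), true);

// Regular listener on the target, invoked afterwards.
button.addEventListener('click', () => console.log('2: button (target/bubbling phase)'));
```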
Removing Handlers
To remove a handler, call removeEventListener on the element with the same parameters you used for adding it. This can be done only in code; there is no way to remove a handler added in markup.
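Note that removal only works with the very same function reference; a sketch:

```typescript
const button = document.querySelector('button') as HTMLButtonElement;

// Keep a reference to the handler, otherwise it cannot be removed later.
function onClick(e: MouseEvent): void {
  console.log('clicked', e.target);
}

button.addEventListener('click', onClick);
// Later, e.g. in a cleanup routine:
button.removeEventListener('click', onClick);
```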
Multiple Handlers
You can attach multiple handlers and they execute in the order you assigned them. This can be done only in code, there is no way to add more than one handler in markup.
Stop Default Behavior
Several elements react to events out of the box. For example, an anchor element will follow the href attribute on a mouse click. If you attach a handler, your handler will execute first, but the internal behavior will happen afterwards. That can be very annoying if the whole point of the handler is to prevent this. To suppress the default behavior, you call the method preventDefault() on the event object. You may have seen returning false from the event handler to achieve the same. But that's an exception and is designed to support this behavior with the on[event] syntax: a handler attached directly in HTML doesn't receive an event object, and without that you wouldn't be able to prevent anything, so you can return false as a substitute. In all other situations the return value is ignored.
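A sketch with an anchor element; the link target is illustrative:

```typescript
const link = document.createElement('a');
link.href = 'https://example.com';
link.textContent = 'do not follow me';
document.body.appendChild(link);

link.addEventListener('click', (e: MouseEvent) => {
  e.preventDefault(); // suppress the navigation to href
  console.log('link clicked, but the browser stays on this page');
});
```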
Follow Up Events
Some events form chains. Let's take keypress as an example. That's a full key cycle. Before it you receive a keydown, and after it a keyup. A similar thing happens for click, which starts with mousedown, followed by mouseup. It's important for the infrastructure to work that way, because the information is needed to detect, for example, a dblclick.
Passive Events
The optional passive: true option of addEventListener signals the browser that the handler is not going to call preventDefault(). That's needed because there are some events, like touchmove on mobile devices (when the user moves their finger across the screen), that cause scrolling by default, but that scrolling can be prevented using preventDefault(). When the browser detects such an event, it first has to process all handlers, and only if preventDefault is not called anywhere can it proceed with scrolling. That may cause unnecessary delays in the UI. The option tells the browser that the handler is not going to cancel scrolling. The browser then scrolls immediately, providing a maximally fluent experience, and the event is handled afterwards. Passive is true by default for touchstart and touchmove in most browsers.
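A sketch of a passive listener that only observes the gesture:

```typescript
window.addEventListener('touchmove', (e: TouchEvent) => {
  // Only observe; never call e.preventDefault() in a passive handler.
  console.log('finger moved, touches:', e.touches.length);
}, { passive: true });
```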
Document Handlers
It's quite often a good idea, and sometimes it makes your life really easy, to stop thinking in terms of adding endless chains of handlers to a growing number of elements. Because the event bubbles anyway, a solution is to add a few handlers (mouse, key, submit) to the document and just check the e.target property to see whether you got the right element. A clever approach is to use the dataset property, reflected in HTML as data- attributes. Say you want a button click to be handled by such a global handler:
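A minimal sketch of such a global handler; the data-action attribute name and its values are illustrative:

```typescript
document.body.innerHTML += `
  <button data-action="save">Save</button>
  <button data-action="delete">Delete</button>`;

document.addEventListener('click', (e: MouseEvent) => {
  const target = e.target as HTMLElement;
  const action = target.dataset.action; // reflected from data-action="..."
  if (!action) {
    return; // not one of "our" buttons
  }
  console.log('global handler dispatches action:', action);
});
```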
The event in this example is added to the document. Because it's the underlying layer, it will receive all unhandled and propagated events. With just one handler you can handle all events globally. That could help write code that's easier to maintain.
4.2 Events in Web Components
Listing 4-2 shows a simple example with an event handler attached on the shadow DOM:
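The listing boils down to something like the following sketch, assuming a <user-card> component with a button in its shadow DOM:

```typescript
customElements.define('user-card', class extends HTMLElement {
  connectedCallback() {
    this.attachShadow({ mode: 'open' });
    this.shadowRoot!.innerHTML = `<p><button>Click me</button></p>`;
    // Inside the shadow DOM the real target is visible.
    this.shadowRoot!.firstElementChild!.addEventListener('click',
      e => console.log('Inner target: ' + (e.target as HTMLElement).tagName));
  }
});

document.body.innerHTML += `<user-card></user-card>`;

// Outside, the event is re-targeted to the host element.
document.addEventListener('click',
  e => console.log('Outer target: ' + (e.target as HTMLElement).tagName));
```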
If you click on the button, the messages are:
- Inner target: BUTTON – internal event handler gets the correct target, the element inside shadow DOM.
- Outer target: USER-CARD – document event handler gets shadow host as the target.
Event re-targeting is a great thing to have, because the outer document doesn't have to know about component internals. From its point of view, the event happened on <user-card>.
Events and Slots
Re-targeting does not occur if the event occurs on a slotted element that physically lives in the light DOM. In the chapter about slots you can find more details regarding slot behavior. For example, if a user clicks on <span slot="username"> in the example below, the event target is exactly this span element, for both shadow and light handlers (Listing 4-3).

If a click happens on "Joerg Krause", for both inner and outer handlers the target is <span slot="username">. That's an element from the light DOM, so no re-targeting occurs.
On the other hand, if the click occurs on an element originating from the shadow DOM, e.g. on <b>Name:</b>, then, as it bubbles out of the shadow DOM, its event.target property is reset to <user-card>.
Event Bubbling
For purposes of event bubbling, the flattened DOM is used. So, if we have a slotted element, and an event occurs somewhere inside it, then it bubbles up to the <slot> and upwards.

The full path to the original event target, with all the shadow elements, can be obtained using event.composedPath(). As we can see from the name of the method, that path is taken after the composition.
In the example above, the flattened DOM is looking like this:
So, for a click on <span slot="username">, a call to event.composedPath() returns an array:
That’s exactly the parent chain from the target element in the flattened DOM, after the composition.
That’s a similar principle as for other methods that work with shadow DOM. Internals of closed trees are completely hidden.
Composed Events
Most events successfully bubble through a shadow DOM boundary. There are a few events that do not.
This is governed by the composed event object property. If it's true, the event does cross the boundary. Otherwise, it can only be caught from inside the shadow DOM.
If you take a look at the UI Events specification, most events have composed: true:

- blur, focus, focusin, focusout
- click, dblclick
- mousedown, mouseup, mousemove, mouseout, mouseover
- wheel
- beforeinput, input, keydown, keyup

All touch events and pointer events also have composed: true.
There are some events that have composed: false though:

- mouseenter, mouseleave (they do not bubble at all)
- load, unload, abort, error
- select
- slotchange
These events can be caught only on elements within the same DOM, where the event target resides.
4.3 Custom Events
When we dispatch custom events, we need to set both the bubbles and composed properties to true for the event to bubble up and out of the component.
In Listing 4-4 I create div#inner in the shadow DOM of div#outer and trigger two events on it. Only the one with composed: true makes it outside to the document.
The structure internally looks like this:
The dispatchEvent API
In the last example I used the dispatchEvent API. It dispatches an event on a target. The listeners are invoked synchronously in their appropriate order. The normal event processing rules apply. An outside observer cannot distinguish between such custom events and those fired by the internal parts of the document. The "event" itself is described by an interface and exists as an instantiable class with the same name. If you work with TypeScript, you have the type and can create instances like this:
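A minimal sketch; the element and the event name are illustrative choices:

```typescript
const card = document.querySelector('user-card') as HTMLElement;

const evt = new Event('state-changed', {
  bubbles: true,     // travel up the DOM tree
  cancelable: false, // cannot be cancelled via preventDefault()
  composed: true     // cross the shadow DOM boundary
});

card.dispatchEvent(evt);
```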
The options dictionary is of type EventInit, with just the three already mentioned properties:

- bubbles: An optional Boolean indicating whether the event bubbles. The default is false.
- cancelable: An optional Boolean indicating whether the event can be cancelled. The default is false.
- composed: An optional Boolean indicating whether the event will trigger listeners outside of a shadow root. The default is false.
In TypeScript the definition looks like this:
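This is how the declaration looks in the TypeScript DOM typings (lib.dom.d.ts):

```typescript
interface EventInit {
  bubbles?: boolean;
  cancelable?: boolean;
  composed?: boolean;
}
```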
Customize Events
Apart from the common Event interface there is another type you can use: CustomEvent. Despite the name you don't need it to fire a custom event, but it's often helpful to convey clearer information about the nature of the event. The only difference is that CustomEvent provides an additional property called detail. This is an object you define on the source, and the receiver can read custom data from it. Its sheer existence clarifies the custom nature of the event. The option is part of the initializer, now named CustomEventInit.
The CustomEventInit type accepts all properties from EventInit, too.
In TypeScript the definition looks like this:
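The relevant declarations from the TypeScript DOM typings look roughly like this (shortened to the parts discussed here):

```typescript
interface CustomEventInit<T = any> extends EventInit {
  detail?: T;
}

interface CustomEvent<T = any> extends Event {
  readonly detail: T;
}

declare var CustomEvent: {
  prototype: CustomEvent;
  new <T>(type: string, eventInitDict?: CustomEventInit<T>): CustomEvent<T>;
};
```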
This provides both, a type definition and a constructor description.
4.4 Smart Events
Adding events requires script work. To make it easier, some global code could be helpful. However, this doesn't change the basic behavior and flow as described before. Events are defined by a special instruction. They are attached to the document object, regardless of the usage.

Events are easy to add directly using a dataset attribute such as data-onclick. All JavaScript events are supported that way. Just replace 'onclick' in the example with any other JavaScript event.
Now, in an application's global start script (see Listing 4-5), attach handlers to anything with such an event definition.
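The original Listing 4-5 (chapter4/smart/index.html) is not reproduced here; a minimal sketch of the idea, assuming the value of data-onclick names a global function, could look like this:

```typescript
// One global listener on the document handles every element that declares
// a handler via data-onclick; the attribute value names a global function
// (this lookup convention is an assumption for the sketch).
document.addEventListener('click', (e: Event) => {
  const source = (e.target as HTMLElement).closest<HTMLElement>('[data-onclick]');
  if (!source) {
    return;
  }
  const handler = (window as any)[source.dataset.onclick!];
  if (typeof handler === 'function') {
    handler.call(source, e);
  }
});
```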
The effect here is, depending on the number of such events, a drastic reduction of the amount of code for attaching events. However, it's not that easy to add corresponding removeEventListener calls. The code is more appropriate for a single page app, where the final state of the code is static and held in memory anyway.
4.5 Summary
In this chapter I explained event handling in the browser, the way we attach events to normal and shadowed Web Components, and how to extend the event system. Using custom events, the way components communicate with each other can easily be extended. Some TypeScript definitions show how the objects are built internally. Attaching events globally using the document object finally shows how to minimize the effort to attach multiple events.
5 Templates
The concept of templates is a fundamental part of almost all web development environments. Examples for server side template languages are something like Razor (.NET), Haml (Ruby), Django (Python), Pug (NodeJS) and Smarty (PHP). Examples for client side template languages can be found in Angular and many more frameworks.
Templates help creating dynamic parts, reduce boilerplate code, and avoid repeating markup. The rise of so many template variants in Web frameworks, client and server side, was driven by the lack of a native alternative in HTML. That changed dramatically with the WHATWG HTML Template Specification. HTML Templates still struggle to be widely accepted, but with their usage in Web Components they find their way back into the light.
5.1 HTML 5 Templates
A built-in <template> element serves as a storage for HTML markup templates. The browser ignores its contents and only checks for syntax validity, but we can access and use it in JavaScript to create or enhance other elements. All modern browsers support this, but to ensure it's really fully supported you may consider a small test.
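Such a test can be as small as checking for the content property; a sketch:

```typescript
// A browser that supports <template> exposes its contents through a
// `content` DocumentFragment property.
const supportsTemplate = 'content' in document.createElement('template');
console.log('template element supported:', supportsTemplate);
```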
How it Works
In theory, we could create any invisible element somewhere in HTML for markup storage purposes. The special thing about <template> is its nature of being cloneable. The content can be any valid HTML, even if it normally requires a proper enclosing tag. For example, we can put a table row <tr> there:
Usually, if we try to put a <tr> inside, say, a <div>, the browser detects the invalid DOM structure and "fixes" it by adding <table> around it. That's not what we want. On the other hand, <template> keeps exactly what we place there.

We can put styles and scripts into <template> as well:
The browser considers <template> content "out of the document". Hence, styles are not applied, scripts are not executed, autoplay of a video element is not run, and so on. Technically it's inert until activated. The content becomes live (styles apply, scripts run and so on) when it's inserted into the document. Also, access from outside, using querySelector or other API calls, will not see the template's content.
You may ask where to place the <template> element, if it isn't part of the document anyway. The answer is: it doesn't matter. It may appear in the <head> element or somewhere in the body among other elements. It depends on the actual content where it makes more sense. Global templates might be better placed in the head, while a row template for some table is probably easier to handle within the table itself.
5.2 Activating a Template
The template content is available in its content property as a DocumentFragment, a special type of DOM node. We can treat it as any other DOM node, except for one special property: we don't insert the template itself, but instead its children, available through the content property. The example in Listing 5-1 shows how to use it.

The interesting part here is that you don't need to select the template. Based on its id attribute it's already available as a global property. That means tmpl.content is available after the browser has parsed the document, and there is no need to query the element explicitly. If you queried it (document.querySelector('#tmpl')), the result would be exactly the same. The result is shown in Figure 5-1.
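Since the original listing is not shown here, a minimal sketch of what Listing 5-1 does could look like this (the ids tmpl and target are illustrative):

```typescript
document.body.innerHTML += `
  <template id="tmpl">
    <p>Hello from the template</p>
  </template>
  <div id="target"></div>`;

// The template is reachable via the global id property; here it is selected
// explicitly for clarity. Its children live in the content fragment.
const template = document.getElementById('tmpl') as HTMLTemplateElement;
const clone = template.content.cloneNode(true);
document.getElementById('target')!.appendChild(clone);
```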
Clone or Import
There are several methods to clone or import nodes. You need a deep copy, but apart from this it's really up to you. One method is importNode, the other is cloneNode. Historically, the importNode method was made to copy content from one document to another. You may see the template with its document fragment as the node source and the actual document as the node sink. But technically it's a clone operation, and here the cloneNode method seems more appropriate. However, the distinction is academic, because both methods lead to exactly the same result; modern browsers don't distinguish here anymore in relation to templates. However, if you read out the ownerDocument property, it could have a different value when using importNode. Let's rewrite the last example to show the difference:
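Both calls produce the same deep copy; a sketch of the two variants, assuming the template with the id tmpl from above:

```typescript
const template = document.getElementById('tmpl') as HTMLTemplateElement;

// Variant 1: historically meant for copying nodes between documents.
const imported = document.importNode(template.content, true);

// Variant 2: a plain deep clone of the content fragment.
const cloned = template.content.cloneNode(true);

document.body.append(imported, cloned);
```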
I, personally, find the cloneNode way more intuitive and easier to read. But it may depend on the real code whether other options suit better.
5.3 Templates and Web Components
Templates play a crucial role in Web Components. They are a fundamental part of creating powerful components. The main purpose is to handle the Shadow DOM properly, either as part of a component or somewhere directly in the DOM.
Shadow DOM
Let's create a Shadow DOM example using the <template> element:
Figure 5-2 shows the result in the browser’s developer tools.
In line 10, when we clone and insert tmpl.content, it is not the DocumentFragment itself that is inserted but its children, <style> and <p>. They then form the shadow DOM.
Shadow DOM and innerHTML
After the initial decision to use the Shadow DOM, the next question is how to get content in there. If templates aren't your concern, you may end up assigning an HTML string to innerHTML directly. That's fine for smaller examples. But using strings for HTML is a sure way to mess things up. If you can't switch to a template engine such as JSX or import the HTML from documents, using the <template> element is the better way to go.
5.4 Nested Templates
Consider the example in Listing 5-6 with a template inside another template.
While this is allowed, the activation is not so simple. The inner template remains inert even if the outer one is properly activated. You need to activate both separately. That's not a big effort, but tricky in all the details.

Whether you work with or without Shadow DOM doesn't matter. For the sake of clarity the example in Listing 5-7 keeps it straightforward. First, the outer template is pulled using the magic property section that corresponds to the template's id attribute (line 2). Then it's cloned. The clone has a collection of children, among them the inner template. Here, too, you can use the magic property, called details according to the inner template's id (line 3). Because the deep clone with cloneNode will also clone the inner template, we remove it (line 4). It's not really disturbing, but, you know, we love clean code. Hence we clone the inner part, add it to the outer clone (line 5), and attach the whole construct to the real DOM (line 6).
Making inner templates invisible to the first layer allows us to keep the template structure clean and readable.
5.5 Template Styles
Styles in templates behave like any other style. A style node can be copied like any other node. But you can also access the host element by using the pseudo selector :host. More about this can be found in the chapter Shadow DOM Styling.
Apply Global Styles
The following example takes care of the template behavior. It uses the template element if needed to create a shadow DOM. It’s the regular creation of a shadowed web component using a separate method. It’s not complete for the sake of brevity, but it shows the idea.
The property this.copyStyles provides a Boolean value to control the behavior. Assume it's an observed attribute to control a component's behavior from the usage side. If it's true, the setup code creates a style element and copies some prepared styles into it. That works even with plain text. The property this.globalStyles is the source. Either it's provided as an attribute, too, or you set up some code in the Web Component's constructor to copy all global styles in one step. That would bring both isolation and global style access. Not always the ideal solution, but often a quick win for complex CSS frameworks.
Copying global styles could look like this:
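A sketch, assuming the this.globalStyles property described above (the name is taken from the text, not from a fixed API):

```typescript
class StyledComponent extends HTMLElement {
  // Collected CSS text of all global <style> elements.
  private globalStyles: string;

  constructor() {
    super();
    // Copy the text of every global <style> element into the component.
    this.globalStyles = Array.from(document.querySelectorAll('style'))
      .map(style => style.textContent ?? '')
      .join('\n');
  }
}
```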
Place this in the component's constructor. If you do this for multiple components, consider making the property this.globalStyles static, check for already copied styles, and skip the code if they are already there. Then the first component enhanced in such a way pulls the styles, and all others in your document benefit silently.
5.6 Summary
In this chapter you learned about templates and how you can use them with or without Web Components. I also covered the usage of templates with slots, nested templates, and how to deal with styles.
6 Slots
A slot is a placeholder that users can fill with their own markup. The slot may exist outside a Web component or inside, in conjunction with a template or Shadow DOM (or both).
6.1 Slots Explained
Many types of components, such as tabs, menus, image galleries, and so on, need content to render properly.
Just like a browser's built-in element <select> expects <option> items, a <custom-tabs> may expect the actual tab content to be passed. And a <custom-menu> may expect menu items.

The code that makes use of <custom-menu> could look like this:
Then our component should render it properly, as a nice menu with given title and items, handle menu events, etc.
Slot and Templates
The following example shows a shadowed template with some neat styling.

The idea here was to provide some initial instruction to make the template more dynamic. The slot is some kind of parameter here: <slot name="p"></slot> (line 42). The name attribute is a reference to the element that has a slot attribute with that name. That's the way to get external information into the template at runtime. The result is shown in Figure 6-1.
Shadow DOM
The Shadow DOM supports <slot> elements that are automatically filled by content from the light DOM. The above example is already "shadowed", but that's just an option. There is no need for slots to use a Shadow DOM.
6.2 Slots and Components
Let's see how slots work in a simple example with Web Components. Here, the <user-card> shadow DOM provides two slots, filled from the light DOM:
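A minimal sketch of such a component, assuming the slot names username and birthday used in this chapter (the birthday value is illustrative):

```typescript
customElements.define('user-card', class extends HTMLElement {
  connectedCallback() {
    this.attachShadow({ mode: 'open' });
    this.shadowRoot!.innerHTML = `
      <div>Name: <slot name="username"></slot></div>
      <div>Birthday: <slot name="birthday"></slot></div>`;
  }
});

// Light DOM usage: the spans fill the matching slots.
document.body.innerHTML += `
  <user-card>
    <span slot="username">Joerg Krause</span>
    <span slot="birthday">01.01.1970</span>
  </user-card>`;
```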
Then the browser performs “composition”: it takes elements from the light DOM and renders them in corresponding slots of the shadow DOM. At the end, we have exactly what we want – a component that can be filled with data.
Figure 6-2 shows the DOM structure after the script, not taking composition into account.
The shadow DOM is under #shadow-root. For rendering purposes, for each <slot name="..."> in the shadow DOM, the browser looks for slot="..." with the same name in the light DOM. These elements are rendered inside the slots. The flattened DOM exists only for rendering and event-handling purposes. It's kind of "virtual". That's how things are shown. But the nodes in the document are actually not moved around!
The last proposition can easily be checked if we run querySelectorAll. All the nodes are still in their places. The example shows that the light DOM <span> nodes are still at the same place, under <user-card>. Check it by executing this piece of code:
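A sketch of such a check, assuming the markup from the sketch above:

```typescript
// The light DOM spans are still children of <user-card>, even though they
// are rendered inside the shadow DOM slots.
console.log(document.querySelectorAll('user-card span').length); // 2
```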
So, the flattened DOM is derived from shadow DOM by inserting slots. The browser renders it and uses it for style inheritance, and event propagation. But JavaScript’s DOM API still sees the document “as is”, before flattening.
6.3 Slot Behavior
In this section I go a little deeper in the specific behaviors of slots.
Slot Positions
Only top-level children may have the slot="..." attribute. It is only valid for direct children of the shadow host (in our example, the <user-card> element). For nested elements it's ignored.

In the example shown in Listing 6-3 the second <span> is ignored (as it's not a top-level child of <user-card>).
Multiple Slots
If there are multiple elements in light DOM with the same slot name, they are appended into the slot, one after another. The next example shows this and makes use of a list created by repeating slots.
However, the user must know that slots provided as <li> are appropriate here. That's some sort of context knowledge that contradicts the abstraction idea behind templates and slots. To avoid this, additional code in the component is required. Also, the very primitive way of working with innerHTML is obviously not the best idea.
6.4 Slot Fallback Content
If we put something inside a <slot>, it becomes the fallback, "default" content. The browser shows it if there's no corresponding filler in the light DOM.

Listing 6-5 shows, in this piece of shadow DOM, that it renders "anonymous" if there's no slot="username" in the light DOM. The <user-card> element is empty, so all slot content falls back to the default text provided in the slots' definitions. Figure 6-4 shows the outcome in the browser and debug view.
6.5 Default Slots
The first <slot> in the shadow DOM that doesn't have a name is the "default" slot. It gets all nodes from the light DOM that aren't slotted elsewhere.

For example, let's add the default slot to our <user-card> that shows all unslotted information about the user:
All the unslotted light DOM content gets into the “Other information” fieldset (line 20).
Elements are appended to a slot one after another (see Figure 6-5), so both unslotted pieces of information are in the default slot together. The named slots are stripped out and placed where the placeholders are as before.
6.6 Slot Events
Now let's go back to the <custom-menu> element mentioned at the beginning of this chapter. We can use slots to distribute menu items. Here's the markup for <custom-menu>:

That's much better than the generic <li> in the slot elements. It requires, however, an additional component. The code now consists of two components:

The slots' content is not further abstracted; instead, it's pulled directly as text using the textContent property.
Add Event Handler
As it is a menu, the last step is adding event handlers. That's not so much different from regular HTML, with just one exception: attached event handlers are not copied in the clone process. Because slots need templates and templates need cloning, we must attach the events in the component and expose the event.

To expose custom events we use the API call dispatchEvent like this:
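A minimal sketch; the component name, event name and payload are illustrative choices:

```typescript
class MenuItem extends HTMLElement {
  connectedCallback() {
    this.addEventListener('click', () => {
      // Expose the internal click as a custom event to the outside world.
      this.dispatchEvent(new CustomEvent('menuclick', {
        bubbles: true,   // let the event travel upwards
        composed: true,  // let it cross a shadow DOM boundary
        detail: { item: this.textContent }
      }));
    });
  }
}
customElements.define('menu-item', MenuItem);
```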
The event name is your personal choice; it's as customizable as any name. If you want to transfer custom data, the class CustomEvent is better than just using Event. This type provides an additional property detail. The receiving component must also access the content of the slot, not the actual definition. The complete example is written in TypeScript. Thanks to the types it gives a better understanding.

In the event receiver the slot is read by querySelector and the slot's selector (line 43). This returns an HTMLSlotElement instance. This is the same as HTMLElement with just one exception: the method assignedNodes. That's the way to access the projected content, the elements that fire the actual event. For all these nodes we attach an event handler that receives the custom event.

Custom events work exactly like the standard events, but they provide an additional field detail, which can be of type any or a type enforced by a generic. To fire a custom event properly, the type CustomEventInit is the right way (lines 14 to 17).
6.7 Updating Slots
Let's continue with the menu example. What if the outer code needs to add or remove menu items dynamically? The manipulation works as with any other element and goes directly into the DOM. Assume you have a single button on the page; the code in lines 6 to 9 would add more items and re-render the component immediately.
The components are the same as in the previous example.
Slot Change Events
If you want to monitor the changes, the API provides a special event for this: slotchange. It fires one more time than your actions, as it also captures the initializing phase.
If we'd like to track internal modifications of the light DOM from JavaScript, that's also possible using a more generic mechanism, the MutationObserver.
6.8 The Slot API
Finally, let's look into the slot-related JavaScript methods. As we've seen before, JavaScript looks at the "real" DOM, without flattening. But if the shadow tree has {mode: 'open'}, we can figure out which elements are assigned to a slot and, vice versa, the slot by the elements inside it:

- node.assignedSlot: returns the <slot> element that the node is assigned to.
- slot.assignedNodes({flatten: true/false}): DOM nodes assigned to the slot. The flatten option is false by default. If explicitly set to true, it looks more deeply into the flattened DOM, returning nested slots in case of nested components and the fallback content if no node is assigned.
- slot.assignedElements({flatten: true/false}): DOM elements assigned to the slot (same as above, but only element nodes).
These methods are useful when we not only need to show the slotted content, but also track it in JavaScript. For example, if the <custom-menu> component wants to know what it shows, it could track slotchange and get the items from slot.assignedElements like this:
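Since the book's CustomMenu listing is not reproduced here, a simplified sketch of the relevant part could look like this (the shadow markup is illustrative):

```typescript
class CustomMenu extends HTMLElement {
  connectedCallback() {
    this.attachShadow({ mode: 'open' });
    this.shadowRoot!.innerHTML = `<ul><slot name="item"></slot></ul><div></div>`;
    const slot = this.shadowRoot!.querySelector('slot') as HTMLSlotElement;
    slot.addEventListener('slotchange', (e: Event) => {
      // Read the elements currently projected into the slot.
      const items = (e.target as HTMLSlotElement).assignedElements();
      const output = this.shadowRoot!.querySelector('div') as HTMLDivElement;
      output.textContent = items.map(item => item.textContent).join(', ');
    });
  }
}
customElements.define('custom-menu', CustomMenu);
```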
This is from the previously shown class CustomMenu. First, the event source is the element to which the slot's content is assigned; here we capture the change (<ul>). The sender is the slot itself, of type HTMLSlotElement. Using the method assignedElements we can get access to the actual elements after the change happened. The rest of the code is just for demonstration: it retrieves the content and makes a visible output into a <div> element.
6.9 Summary
In this chapter we covered the <slot> element and how to use it to parameterize templates. Some examples in JavaScript and TypeScript show the power of the underlying API, dealing with slot instances and handling slot-specific events.
7 Components and Styles
Due to the fact that the DOM might be isolated, the styles are isolated too. The advantage is primarily the ability to use styles without knowing and disturbing any globally assigned styles. The disadvantage might be the reduced usability of global styles.
7.1 Style Behavior
Shadow DOM may include both <style> and <link rel="stylesheet" href="…"> tags. In the latter case, stylesheets are HTTP-cached, so they are not re-downloaded for multiple components that use the same template.
As a general rule, local styles work only inside the shadow tree, and document styles work outside of it. But there are a few exceptions.
Accessing the Host
The :host selector allows selecting the shadow host (the element containing the shadow tree).

For instance, we're making a <custom-dialog> component that shall be centered. For that we need to style the <custom-dialog> element itself. That's exactly what :host does, as shown in Listing 7-1.
While the component is shadowed, the styles still apply due to the selector in line 3.
Cascading
The shadow host (<custom-dialog> itself) resides in the light DOM, so it's affected by the document's CSS rules.

If there's a property styled both in :host locally and in the document, then the document style takes precedence. For instance, assume in the document we had a style as shown here:
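A sketch of that scenario; selectors and values are illustrative, and the document rule wins over the :host default:

```typescript
customElements.define('custom-dialog', class extends HTMLElement {
  connectedCallback() {
    this.attachShadow({ mode: 'open' });
    this.shadowRoot!.innerHTML = `
      <style>
        :host { display: block; padding: 1rem; } /* component default */
      </style>
      <slot></slot>`;
  }
});

// In the outer document this rule overrides the :host padding above.
const docStyle = document.createElement('style');
docStyle.textContent = `custom-dialog { padding: 0; }`;
document.head.appendChild(docStyle);
```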
In that case the <custom-dialog> component would be without padding.
It’s very convenient, as we can setup “default” component styles in its :host rule, and then easily override them in the document.
The exception is when a local property is labelled !important; for such properties, local styles take precedence. That's the normal CSS behavior.
Selecting a Host Element
Selecting a host element with :host(selector) works the same as :host, but the styles are applied only if the shadow host matches the selector.

For example, we'd like to center the <custom-dialog> only if it has the centered attribute, as shown in Listing 7-2. Now the additional centering styles are only applied to the first dialog: <custom-dialog centered>.

That's a smart technique that unleashes the power of CSS on the level of custom attributes. In bigger and hence more complex applications it's an advantage to avoid the usage of multiple "data-" attributes and nested classes and replace them with simple top-level attributes. However, you should try to find a balance between those techniques. Creating a style system that is very closely bound to Web Components might look attractive at first. But the further away you move from established CSS, the bigger the risk that using existing sets of style rules becomes almost impossible.
Accessing the Host Context Aware
There is another selector, :host-context, that brings even more control. Using :host-context(selector) is the same as :host, but applied only if the shadow host or any of its ancestors in the outer document matches the selector. For example, :host-context(.dark-theme) matches only if there's a dark-theme class on <custom-dialog> or anywhere above it:
To summarize, we can use the ":host"-family of selectors to style the main element of the component, depending on the context. These styles (unless !important) can be overridden by the document.
7.2 Styling Slotted Content
Now let's consider the situation with slots. Slots are explained in great detail in the chapter Slots. Slotted elements come from the light DOM, so they use document styles. Local styles do not affect slotted content.

In Listing 7-4, the slotted <span> is bold, as per the document style, but does not take the background from the local style.
The result is bold, but not red. If we’d like to style slotted elements in our component, there are two choices.
First, we can style the <slot> itself and rely on CSS inheritance, as shown in Listing 7-5. Here <p>Joerg Krause</p> becomes bold, because CSS inheritance is in effect between the <slot> and its contents. But in CSS not all properties are inherited.
Another option is to use the ::slotted(selector) pseudo selector. It matches elements based on two conditions:
- It’s a slotted element, that comes from the light DOM. The slot’s name doesn’t matter. Just like any slotted element, but only the element itself, not its children.
- The element matches the selector.
In our example, ::slotted(div) selects exactly <div slot="username">, but not its children:
Please note that the ::slotted pseudo selector can't descend any further into the slot. The following selectors are invalid:

Also, ::slotted can only be used in CSS. We can't use it in querySelector to select elements. That's not specific to Web Components; pseudo-element selectors cannot be used to select elements using the integrated selector API.
7.3 CSS Hooks
To style internal elements of a component from the main document you can use additional hooks. Selectors like :host apply rules to the <custom-dialog> or <user-card> element, but how do we style shadow DOM elements inside them?
There’s no selector that can directly affect shadow DOM styles from the document. But just as we expose methods to interact with our component, we can expose CSS variables (custom CSS properties) to style it. Custom CSS properties exist on all parts, both in light and shadow DOM.
For example, in the shadow DOM we can use a --user-card-field-color CSS variable to style fields, and the outer document can set its value by declaring the property for <user-card>:
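A sketch of both sides of this contract; the .field class is mentioned below, the colors are illustrative:

```typescript
customElements.define('user-card', class extends HTMLElement {
  connectedCallback() {
    this.attachShadow({ mode: 'open' });
    this.shadowRoot!.innerHTML = `
      <style>
        .field { color: var(--user-card-field-color, black); } /* black is the fallback */
      </style>
      <div class="field">Name: Joerg Krause</div>`;
  }
});

// The outer document sets the custom property; it pierces into the shadow DOM.
const style = document.createElement('style');
style.textContent = `user-card { --user-card-field-color: green; }`;
document.head.appendChild(style);
```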
Custom CSS properties "pierce" through the shadow DOM; they are visible everywhere, so the inner .field class will make use of it. Listing 7-7 shows the full example.
Ignoring Styles
I discussed the possibility to copy global styles into the component at the beginning of this chapter. That's a primitive technique and contradicts the isolation principle. Sometimes it would be good to do this selectively. Following the pattern of the pseudo selectors shown before, we could add a "custom" pseudo selector; browsers will simply ignore it. You can add the pseudo selector :ignore to global styles, for example:

This style will not be copied. It keeps working in the browser as before, because the non-standard pseudo selector is being ignored. You may wonder why it works at all, as I said that pseudo selectors cannot be used as a selection criterion. That's true, but only for those supported by the standard. When it comes to made-up pseudo selectors, they seem to work as part of the rule name. But keep an eye on this, as future implementations may change the browsers' behavior.
7.4 Parts
Shadow DOM is a specification that gives us DOM and style encapsulation. This is great for reusable Web Components, as it reduces the risk of these components' styles getting accidentally stomped over, but it adds a barrier for styling and theming these components deliberately. In developer terms it's like having namespaces for isolation but no proper import statement to use them selectively.
When styling a component, there are usually two different problems you might want to solve:
- Styling: You are using a third-party <fancy-button> element on your site and want this one to be blue.
- Theming: You are using many third-party elements on your site, and some of them have a <fancy-button>; all of the <fancy-button> components have to be blue.
Let’s look into an example with properties:
The problem with using just custom properties for styling or theming is that it places the onus on the element author to basically declare every possible styleable property as a custom property.
The Part Attribute and Pseudo Selector
The current proposal of the standardization bodies is ::part (and possibly ::theme, which is still a draft), a set of pseudo-elements that allow you to style inside a shadow tree from outside of that shadow tree. You can specify a "styleable" part on any element in your shadow tree by using the part attribute:

If you're in a document that has a <my-part> element in it, then you can style those parts with a selector like this:
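A minimal sketch of both sides, assuming a <my-part> component that exposes an inner element as part="some-box" (the names follow the text, the styling is illustrative):

```typescript
customElements.define('my-part', class extends HTMLElement {
  connectedCallback() {
    this.attachShadow({ mode: 'open' });
    this.shadowRoot!.innerHTML = `
      <div part="some-box">I can be styled from outside</div>`;
  }
});

// In the outer document, ::part() reaches into the shadow tree.
const partStyle = document.createElement('style');
partStyle.textContent = `my-part::part(some-box) { color: blue; }`;
document.head.appendChild(partStyle);
```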
You can use other pseudo elements or selectors (that were not explicitly exposed as shadow parts), so both of these work:
You cannot select inside of those parts, so this doesn’t work:
You cannot style this part more than one level up if you don't forward it. So without any extra work, if you have an element that contains a my-part like this:
You cannot select and style the my-part component’s part like this:
Forwarding Parts
You can explicitly forward a child’s part to be styleable outside of the parent’s shadow tree with the exportparts attribute. So in the previous example, to allow the some-box part to be styleable by my-bar’s parent, it would have to be exposed:
The exportparts forwarding syntax has several options.
With exportparts=”some-box some-input” you can explicitly forward the component’s parts that you know about (i.e. some-box and some-input) as they are. These selectors would match:
Using the syntax exportparts=”some-input: bar-input” you can explicitly forward (some) of component’s parts (i.e. some-input) but rename them. These selectors would match:
The following selectors would not match:
You can combine these, as well as add a part to the my-part component itself (some-foo, as shown below; this means "style this particular my-part, but not the other one, if you had more"):
Given the above prefixing rules, to style all inputs in a document at once, you need to ensure that all elements correctly forward their parts and select all of their parts.
So given this shadow tree we come to the final solution as shown in Listing 7-8.
In the browser's debug view it would look something like this:
You can style all the inputs with:
This is a lot of effort on the element author, but easy on the theme user.
If you hadn’t forwarded them with the same name and some-input was used at every level of the app (the non contrived example is just an <a> tag that’s used in many shadow roots), then you’d have to write:
This is a lot of effort on the theme user, but easy on the element author.
Both of these examples show that if an element author forgot to forward a part, then the app can’t be themed correctly.
7.5 Summary
In this chapter I covered the way we can add cascading style sheets to Web Components. We saw how to pierce the isolation boundary and how to deal with several pseudo selectors to make components styleable and themeable. Some parts of the standards are currently under active development, and even those parts have been discussed here.
8 Making Single Page Apps
A single-page application (SPA) is a web application or website that interacts with the web browser by dynamically rewriting parts of the current web page with new data from the web server, instead of the default method of the browser loading entire new pages. The goal is faster transitions that make the website feel more like a native app.
In a SPA, all necessary HTML, JavaScript, and CSS code is either retrieved by the browser with a single page load, or the appropriate resources are dynamically loaded and added to the page as necessary, usually in response to user actions. The page does not reload at any point in the process, nor does it transfer control to another page, although the location hash or the HTML5 History API can be used to provide the perception and navigability of separate logical pages in the application. The history API makes the browser's navigation buttons work properly.
8.1 Architecture SPAs
SPAs consist of several parts and layers.
8.2 The Router
Usually, when we create SPAs (Single Page Apps), we use a router. While it sounds complicated, in reality it isn't. Of course you can add tons of features, but the basic behavior is always the same. The basic function consists of two parts:
- Monitoring the navigation URL
- Defining a target location for the replaceable part
Monitoring the URL
In a SPA we don't care about the full URL. To prevent the browser from navigating away, the router instruction is delivered using a hash value ('#value'). If that hash changes, the appropriate event fires.
To monitor the URL’s hash we can use an event like this:
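A minimal sketch of such a listener:

```typescript
// React to hash changes and read the new route.
window.addEventListener('hashchange', () => {
  const route = location.hash; // e.g. '#/page1'
  console.log('navigate to', route);
});
```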
As you can see in Listing 8-1, the hash value determines the action. A router usually has a router configuration, which is simply a dictionary with the hash values and the components' types.
Because we create components on the fly, we can't provide dynamic data for attributes. It makes sense to have some sort of common convention in your project to achieve a good router.
The router will load just containers. Their whole purpose is to serve as a starting point for your business components. Such containers have no code, no attributes, and return the basic structure of an application fragment.
Configure the Router
The easiest way to configure a router is a simple dictionary. This could be dynamic, loaded from JSON, based on certain circumstances, or simply an object defined in the main component. The code in Listing 8-2 shows how to register routes.
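A sketch of such a dictionary; Page1Component is mentioned later in this chapter, the other class names and tag names are illustrative:

```typescript
class Page1Component extends HTMLElement {}
class Page2Component extends HTMLElement {}
class Page3Component extends HTMLElement {}
customElements.define('app-page1', Page1Component);
customElements.define('app-page2', Page2Component);
customElements.define('app-page3', Page3Component);

// The router maps hash values to component types; instances are created
// later with the `new` operator when the route becomes active.
const routes: { [hash: string]: new () => HTMLElement } = {
  '#/page1': Page1Component,
  '#/page2': Page2Component,
  '#/page3': Page3Component
};
```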
Define the target
The target is usually called an outlet. The placement of a component is a simple DOM operation: the first step is removing the possibly existing component; in a second step the new component is added. The browser's engine takes care of the render process.
To define an outlet where the components appear you can either use existing elements or a component. The example in this chapter uses another component like this:
The outlet must know when the user clicks somewhere, changing the URL, and depending on that URL it must pull the right component and add it to the DOM.
As shown in the previous section it’s easy to monitor the navigation URL.
8.3 Router Implementation
Now that we know the technical base it’s time to implement. The full example is written in TypeScript and for the sake of simplicity in just one file. It consists of these parts:
- A main component with navigation links
- The router outlet with routing logic
- Three demo components that deliver content
To work with this example don’t forget to transpile first using the tsc command.
The code in Listing 8-3 works when called from a simple HTML page as shown in Listing 8-4. Let's investigate the crucial parts here. As we use TypeScript, the typing is critical. The type definition helps the transpiler understand that the given type can be instantiated (line 1). In the router dictionary (line 32) we place the pure type objects and later create an instance by calling the new operator. The dictionary definition is on line 48:
The event for monitoring hash changes is added in connectedCallback. It's removed in case you disconnect later. That's not necessary in such a simple app, but in reality it will grow and then even the parent element could be dynamic. So we need to take care of the event handlers to avoid memory leaks. Quite often you'll face situations where some sort of child routing is necessary and multiple outlets are being targeted.
The actual component exchange consists of three steps. First, in line 72 we look for a valid route. Consider adding a fallback here to capture wrongly constructed links. Second, the current content of the router is removed. Third, the actual type is retrieved, instantiated (line 75) and inserted into the DOM (line 73). The browser renders it and it appears immediately.
In the beginning the outlet is empty and no page is shown. If you want to fall back to Page1Component, the following code will do the trick:
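A sketch of such a fallback, assuming the '#/page1' route from the dictionary sketch above:

```typescript
// If the page is opened without a hash, jump to the default route; this
// triggers the hashchange handler and renders Page1Component.
if (!location.hash) {
  location.hash = '#/page1';
}
```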
Figure 8-1 shows the result of the demo application.
8.4 The History API
The HTML5 history API gives you access to the browser navigation history via JavaScript. It is really useful in single page web apps: a single page app can use the API to make a certain state in the app available for bookmarking and for navigation with the respective buttons.
The History Stack
The browsing history consists of a stack of URLs. Every time the user navigates within the same website, the URL of the new page is placed at the top of the stack. When the user clicks the "back" button, the pointer in the stack is moved to the previous element on the stack. If the user then clicks the "forward" button, the pointer is moved forward to the next element on the stack. If the user clicks "back" and then clicks on a new link, the top-most element on the stack will be overwritten with the new URL.
The history Object
You access the browsing history via the history object, which is available as a global object. The history object contains the following functions:

- back()
- forward()
- go(index)
- pushState(stateObject, title, url)
- replaceState(stateObject, title, url)
The back function moves the browsing history back to the previous URL. Calling back has the same effect as if the user clicked the browser's "back" button.

The forward function moves the browsing history forward to the next page in the history. Calling forward has the same effect as clicking the browser's "forward" button. This is only possible if the back function has been called before, or if the "back" button has been clicked. If the history already points to the latest URL in the browsing history, there is nothing to move forward to.
The go(index) function can move the history either back or forward depending on the index you pass as a parameter. If you call it with a negative index (e.g. go(-1)), the browser moves back in the history. If you pass a positive index (e.g. go(1)), the browser moves forward in the browsing history. The index indicates how many steps to move either forward or back.
The pushState(stateObject, title, url) function pushes a new URL onto the history stack. The function takes three parameters: the url is the URL to push onto the history stack; the title parameter is mostly ignored by the browsers; the stateObject is an object that will be passed along with the event fired when a new URL is pushed onto the history stack. This stateObject can contain any data you want; it is just a JavaScript object. This function is probably the most important one to use with the router, because it allows you to add states even if the internal behavior does not recognize the action made in code accordingly.
The replaceState(stateObject, title, url) function works like the pushState function except that it replaces the current element in the history stack with a new URL. The current element is not necessarily the top element. It is the element currently being pointed to, which can be any element in the stack if the back, forward and go functions have been called on the history object.
History Change Events
The HTML5 history API enables a web page to listen for changes in the browser history. The security restrictions apply here too, so a web page will not be notified of history changes that lead to URLs outside of the domain of the web page.
To listen for changes in the browser history you set an onpopstate listener on the window object. Here is a browser history event listener example:
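A minimal sketch of such a listener:

```typescript
window.onpopstate = (e: PopStateEvent) => {
  // e.state is the stateObject that was passed to pushState()/replaceState().
  console.log('history changed, state:', e.state, 'url:', location.href);
};
```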
The onpopstate event handler function will get called every time the browser history changes within the same page (the browser history that page pushed onto the history stack). The reaction to a history change event could be to extract parameters from the URL and load the corresponding content into the page (e.g. via AJAX).
Summary
As you can see, creating a router for a SPA is very easy and there is no need to work with a full blown framework for just this single task. All you need is a basic understanding of the HTML 5 API and, of course, Web Components. One thing could be a little bit more challenging, though. Adding and removing components means that the instances are being unloaded and destroyed. You can see this if you put an output in the disconnectedCallback methods. That means the components are stateless. While the whole application stays in memory all the time, the actually working components are ephemeral.
What we need to solve this is a global state. That's the purpose of the Flux architecture: writing stateful applications. I cover this in the next section, and it will be less complicated than you think.
8.5 Stateful Apps
Some time ago someone wrote that the only reason to use a full frontend framework is to keep the application's state. Keeping a global and central state is a very crucial part of a SPA. As you have seen in the previous section, the components are ephemeral. They load and unload as the user clicks through the application's UI. Keeping a state is necessary to avoid endless round-trips to the server. Of course, there are some APIs we could use, such as localStorage. But this would result in a deep coupling between components. Any component that wants to consume a certain state must know exactly how another component has written this value. Tight coupling is the mother of all software hell. Any change will lead to an endless chain of changes, and in a usually surprisingly short period of time the software becomes a mess nobody can handle anymore.
A global state solves this by putting all values in a central space and letting all components access it in a well defined way. Because there are many ways to implement such a thing and as many ways to use it, it would be good to have a distinct architectural pattern for this task. The architecture we use nowadays to achieve the goal of a stateful application is called Flux.
Flux
The Flux pattern is not just about the state. It also provides a well defined way to handle the business logic. The components we have seen so far are just pieces of the user interface (UI) – also called views. Because it's code, you can place actual business logic in it. But that feels bad, because it violates another principle, called "separation of concerns". Following this principle you should not mix view code (UI) with logic code. Putting the logic outside the component is easy by using services, but this would again create a tight coupling between the components and their services. Hence, one problem solved and another one created.
The Flux architecture solves this by introducing a very smart way to handle the state changes using logic outside of components. First, let's look at a simple chart, shown in figure 8-2.
The main principle is the data flow. It's always uni-directional. The data comes into the store in a specific way and from there goes to the component. The store is not directly accessible. Such a protected store is very reliable, hence it is often called the single source of truth. So, whatever is going on in your app, the store knows it, every other part can ask the store, and nobody else can change it.
The Flux Parts
To make this work, we need a few parts with very clear tasks:
- Actions that define tasks (such as SEARCH, LOAD, SET, REMOVE, you name it).
- Reducers that are pure function calls that do what your business logic requires (change data, call services).
- A Store that holds all the data. The reducer can request a change of the state, but nobody else can.
In the component you have two possible options:
- Call a Dispatcher by sending actions along with an (optional) payload.
- Listen for changes via a Subscriber on the store to know when a reducer finished its task.
That sounds complicated and the amount of boilerplate code is significant. But the outcome is outstanding. Using a Flux model improves the code quality dramatically, the strictness and clearness of the code is astonishing, and the ability to handle huge applications is a big step forward.
Tell Tales
The first Flux implementation was quite complicated and soon developers ditched the proposal, not seeing it as a real advantage. On top of this, several store libraries appeared that had a simpler API, reduced functions, or a more clever approach to handle the data. One of the well known libraries is called Redux. But there are several others. Most are independent of a certain framework, some are bound to a specific one (Redux is commonly used with React, NgRX is the counterpart for Angular, and MobX is an independent one, for example).
Implementing Flux
In this chapter I’m going to show how to implement such a pattern from scratch. No library, no framework. It’s a good starter for learning and often more than enough for real life projects. It’s a few more lines of code, but worth the effort to browse through the lines.
The code also doesn't use any packer (such as WebPack), hence nothing resolves the module files' extensions. That's not exactly how it works in real life TypeScript projects with multiple files, but it reduces the amount of boilerplate code drastically. Hence the import statements look like this:
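For example (the module name is just an example):

import { store } from './store.js';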
The difference is the trailing .js that the browser's native ES module loader requires and that an ordinary packer like WebPack would not need. Again, that's to simplify the demo. If you take the code over into a real project you must remove the file extensions!
Overview
The example consists of three parts:
- A Web component to start the application
- The store implementation
- An observer for a publish/subscribe pattern
The publish/subscribe pattern is the underlying technique to communicate with the store. It allows the component to subscribe to store changes and refresh the UI when such a change happens. The store itself monitors the changes initiated by reducers and invokes the publish method. The observer is a simple class, not an external library. The whole code has no dependencies.
The code is written in TypeScript. Call the TypeScript transpiler first to get executable JavaScript:
$ tsc
The Demo Component
This web component in Listing 8-5 is just to demo the usage. It’s a simple counter that can increase and decrease values.
An event handler is added for the buttons (line 26) that calls a handler method (line 30). Here we use the dataset
object that handles the HTML 5 data-
attributes as properties. After we have the right values we dispatch the action along with the payload to the store (line 33). As you can see, the component does not contain any business logic. It also knows nothing about the inner structure of the store. The store is global and static.
The component also monitors an item in the store, called value. This makes use of a subscription, assigned in the constructor (lines 7 to 10). Once a change appears the value is written into the DOM.
All the component must know is that we have actions (INC, DEC) and that these actions accept a numeric payload. Also, the component must know the actual store value (value).
The Store
The store code starts with the definition of the actions. In a more complex scenario this could include the payload definition. Also, an Action
interface is often a good idea. Here I tried to make it as simple as possible. The definition just says: “Dear developer, this is what the application can handle”.
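A minimal sketch, assuming the INC and DEC actions of the counter demo:

// actions.ts – just names what the application can handle
export const INC = 'INC';
export const DEC = 'DEC';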
The business logic is a pure function call. That can be some calculation or even a server call to retrieve data from a REST service. In the latter case, make the reducers async.
The reducer is called by applying an action, provided as a key (lines 2 and 7). It also receives the current state in case you need it. The second parameter is the payload. Not all actions in an app need a payload, so either provide null or make the parameter optional. In this example we need the payload for both actions. Lines 4 and 9 contain the actual logic, which is very simple here. Then the reducer returns an object with the actual change. It's a common pattern that a reducer has no side effects – one action changes one value. But sometimes it's necessary to change more than one, so technically it's possible (hence the object, see lines 5 and 10).
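As a rough sketch (not the book's exact listing), such a reducer map could look like this:

// reducer.ts
import { INC, DEC } from './actions.js';

export const reducer = {
  [INC]: (state: { value: number }, payload: number) => {
    // the actual logic: increase the value and return only the changed part
    return { value: state.value + payload };
  },
  [DEC]: (state: { value: number }, payload: number) => {
    return { value: state.value - payload };
  }
};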
The store (see Listing 8-8) consists of two parts: The Store
class itself is a base for all stores and independent of a concrete type. To make this feasible I use a generic T.
The store has a Proxy (line 20) to supervise the changes. Whenever the state object changes, the proxy calls the observer and publishes the changes to all subscribers (line 22). The trigger is the reducer, which is called by the dispatch method. Once a component dispatches an action, the code checks whether it's valid and, in case there is a reducer, it's being called (toCall, line 37).
After the reducer has done its work, the changes are applied to the store state. That's the only point where it's allowed to change the store (line 38). The Proxy
is triggered implicitly after this call.
Now, you define your actual store. The store is an instance of the Store<T>
class. First, we create an interface that defines the store structure. From that we derive a type that helps TypeScript to understand how to deal with the store values.
The actual store instance (line 9) is the only piece we work with later. It knows the actions, the reducers, and the state. You may wonder why the values are not provided as arrays. Using objects is better here, because in real life applications the store structure might be more complicated. Assume you want to hold a state per component. That's easy, because you just create another instance of the store class and assign the few values you need. Now you have a global store (usually just one) and a local store (for just one component). The objects can now easily be merged into a single store.
Merging Stores
To merge, you create a new store object that has both parts: actions, reducers, and states. In case you put it into the store class, a merge method could look like this:
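A sketch of such a method, assuming the store keeps its actions, reducer, and state in fields with exactly these names:

mergeStore<V>(other: Store<V>): Store<T & V> {
  // combine actions, reducers, and state of both stores into the current one
  Object.assign(this.actions, other.actions);
  Object.assign(this.reducer, other.reducer);
  Object.assign(this.state, other.state);
  return this as unknown as Store<T & V>;
}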
The returned type is then a join of both types T & V
(line 1). The current object is enhanced and the existing store object (type V) is no longer in use.
The Observer
The Observer class creates the publish/subscribe pattern. It’s a singleton – one instance for all – and it handles subscriptions on a per store value base.
It has only two methods: publish and subscribe. The subscriber registers a callback (or many) – hence it's an array of arrays. The publisher loops through all the callbacks and calls them.
Note the return value in line 37. This is to safely remove the subscriber. That's not used in the demo code, but if you combine the Flux store with the router code shown earlier in this chapter, then the components may unload. In that case the subscription is still valid and the publisher fires (into nowhere). This causes memory leaks and eventually decreases the performance. If that's your scenario, then just use the disconnectedCallback
method in the component class to call the remove method returned by the subscriber.
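A compact sketch of such an observer (field and method names are assumptions):

export class Observer {
  private static instance: Observer;
  // one array of callbacks per store value – hence an array of arrays
  private subscribers: { [key: string]: Array<(value: any) => void> } = {};

  static getInstance(): Observer {
    if (!Observer.instance) {
      Observer.instance = new Observer();
    }
    return Observer.instance;
  }

  subscribe(key: string, callback: (value: any) => void): () => void {
    (this.subscribers[key] = this.subscribers[key] || []).push(callback);
    // the returned function removes the subscription again
    return () => {
      this.subscribers[key] = this.subscribers[key].filter(cb => cb !== callback);
    };
  }

  publish(key: string, value: any): void {
    (this.subscribers[key] || []).forEach(cb => cb(value));
  }
}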
8.6 Summary
This chapter covered the two major concepts for creating single page applications: router and store. The router handles routes and exchanges components dynamically. The store is the single source of truth for an application and holds the state for the ephemeral components.
The demo implementations are simple yet powerful. They show that you usually won't need an additional library or a dependency to achieve these goals. In combination with Web Components it's now possible to handle even the most complex applications in a professional way.
9 Professional Components
To further reduce the amount of code for Web Components I suggest some smart enhancements. Using decorators you can make the code even easier to read. That's the power of TypeScript. All examples in this chapter are written in TypeScript.
9.1 Smart Selectors
When you work with the DOM you often need to use querySelector
and querySelectorAll
. Most of the dynamics of components lie in these calls. That can lead to code blocks that are hard to read. Even more critical, these blocks are hard to maintain. If the view code changes you must browse the code manually and change the selectors accordingly. Time to invent a smart selector.
The Smart Selector Decorator
We need a decorator definition that can be applied to a property. First, the selection of a single element is shown in Listing 9-1.
The selection of several elements is shown in Listing 9-2.
Now, in a component you use the decorator like this:
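Roughly like this (the decorator names are assumptions; a sketch of their implementation follows in the next section):

export class ListComponent extends HTMLElement {
  // resolves to this.querySelector('#output') on access
  @QuerySelect('#output') output!: HTMLElement;
  // resolves to this.querySelectorAll('li.item')
  @QuerySelectAll('li.item') items!: NodeListOf<HTMLElement>;
}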
The advantage is that the selector appears exactly once, regardless of how often you use the referenced element.
How does it work?
The decorator is executed once the component is set up. It's set on a property and the call includes the property name. Because common properties in JavaScript do not have any restrictions, the decorator function replaces the property with a new variant of the same name. This new property has multiple settings (a small sketch of the decorator follows the list):
- The property is no longer changeable (configurable: false).
- The property is not enumerable. That means if you iterate over all properties with a for of loop it's invisible. That doesn't change the ability to be directly accessible.
- The property is readonly. It has only a getter and here we call the querySelectorAll or querySelector method.
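A minimal sketch of such a decorator (names are assumptions, error handling omitted):

// creates a property decorator that replaces the property with a
// read-only getter calling querySelector on the component instance
export function QuerySelect(selector: string) {
  return function (target: any, propertyKey: string): void {
    Object.defineProperty(target, propertyKey, {
      configurable: false,
      enumerable: false,
      get: function (this: HTMLElement) {
        return this.querySelector(selector);
      }
    });
  };
}

// same idea for multiple elements using querySelectorAll
export function QuerySelectAll(selector: string) {
  return function (target: any, propertyKey: string): void {
    Object.defineProperty(target, propertyKey, {
      configurable: false,
      enumerable: false,
      get: function (this: HTMLElement) {
        return this.querySelectorAll(selector);
      }
    });
  };
}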
You could also think of another way using a second parameter and combine the calls to querySelectorAll
and querySelector
into one decorator. The example should just give you an idea of what decorators are for and how powerful such a little piece of code can be.
9.2 Data Binding
Almost all of the major frameworks provide data binding. In fact, the manufacturers of these frameworks often try to tell everybody that the binding is one of the core features. It's quite helpful and it avoids writing a lot of code, indeed. Out of the box, Web Components don't have full support for data binding. Hence, we need to write a bit of code to get a similar behavior.
Why Data Binding?
Data binding is a way to bind element properties to pure code objects. Imagine a form with some text boxes. The application pulls some data from a REST service and you have to write, property by property, all the values into the text boxes. That means first selecting the right element, figuring out what property to use, and writing the value into it. If something must be changed (element names, property names, data types, or whatever), there are several lines of code that need to be adjusted. Sounds like a lot of work and a good source for errors. And it is the weak part of Web Components, indeed.
Implement Data Binding
But the framework manufacturers are no magicians, and the code they use internally is not so hard to understand that it would be impossible to do something similar with plain TypeScript. So, let's look at how it could work.
First, to detect changes of object data, we can use a Proxy
object. That's native ECMAScript 2015 and all modern browsers have full support. A proxy monitors all properties and calls a callback function when a change occurs. You could now look up some sort of binding definition, select the appropriate target element (the one bound to) and write the value into the right property.
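A minimal sketch of that idea:

const model = { name: '' };
const proxy = new Proxy(model, {
  set(target, property, value) {
    (target as any)[property] = value;
    // here you would look up the binding definition and update the bound element
    console.log(`property '${String(property)}' changed to`, value);
    return true;
  }
});
proxy.name = 'Hello'; // triggers the set trap and, with it, the UI update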
Second, the reverse way is a little more effort. Changes to the DOM can be monitored, but changes to element properties will fire events. That means we need to add event listeners to any bound element and monitor these. An additional challenge is to avoid backfiring. If you write the value received by an event handler back into the proxied object, the proxy would fire the binding and access the element. We can assume that the element will not fire the event again, because input elements are smart enough to recognize a real change, but some simple events might not handle this as expected. In extreme situations this results in a loop.
As you can see, there is a lot to consider. Let's take a look at a simplified approach. This is a component that includes all the binding code for the sake of simplicity. In a real project we would extract this part into a separate class. Let's go step by step through the code.
To have support for types in the editor it’s recommended to use a view model. Technically it’s a simple class.
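A minimal sketch (the property name is an assumption; the text below calls this class InputViewModel):

export class InputViewModel {
  name = '';
}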
The component has a text box (input element), an output element (span) and a button to show how to change a value programmatically. All these parts are defined in the connectedCallback
method. In the constructor the view model is instantiated as a proxy (line 6). Listing 9-3 shows the complete code.
To trigger the binder we need a binding configuration. To stay HTML conform I use data- attributes. In the attribute I define two or three distinct values: model property, element property, and event name (modelProperty:elementProperty:eventName). The third value, the event, is optional. Some elements, such as the output <span>, obviously do not fire events. The code is split into two parts, as per the thoughts in the beginning of this section.
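The markup could look roughly like this (the attribute name data-bind and the model property name are assumptions):

this.innerHTML = `
  <!-- model property : element property : event name -->
  <input type="text" data-bind="name:value:input" />
  <!-- no event: the span only receives values, it never reports any -->
  <span data-bind="name:textContent"></span>
  <button type="button">Change</button>
`;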
The proxy intercepts the setter path and reacts to changes in the view model. It looks for bindable elements, extracts the binding instructions into the fields field and property, and sets the values accordingly.
The event handler is added once after the connectedCallback
has written the DOM. The same strategy is used here. First the binding instruction is extracted, then the handler is attached accordingly, and the received value, determined by the binding instruction, is written into the model. The proxy checks for changes to avoid the backfire.
The button is just to demonstrate the programmatic change and how the value is shown immediately on the screen. Please note the bind
call here that is required to set the component itself as this
in the button's click event handler. Alternatively you can use a complete function call like this (the lambda expression prevents the handler from changing this
):
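For example (the handler name clickMe is an assumption):

const button = this.querySelector('button') as HTMLButtonElement;
// variant 1: bind the component instance explicitly
button.addEventListener('click', this.clickMe.bind(this));
// variant 2: a lambda keeps `this` pointing to the component
button.addEventListener('click', (e: Event) => this.clickMe(e));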
Discussion
Of course, all this is extremely simple. It's working very well, too. One possible improvement is to cache the retrieval of bindable elements. In the current code the call to querySelectorAll happens on each value change. In complex applications this could take too much time.
The handling of types is not optimal, though. The model is of type InputViewModel, but the definition type is Record<string, any>
. That’s currently a requirement, because the access to the model itself is dynamically using the instance[property]
syntax. That’s perfectly working in JavaScript, but TypeScript doesn’t understand our intention and complains about the weak type usage. Here a generic could help. On the other hand, if you extract the code and write a generic class that handles the bindings, it does not matter anymore because the actual type is an abstract one anyway.
The definition in the data-bind attributes is obviously not type safe either. An editor extension could handle this (the same happens with Angular, where the editors don't understand the binding either, but editor extensions for Angular exist). To become type safe and avoid the need for additional editor support we would need a template language that has both common editor support and the ability to deal with TypeScript natively. It's beyond the scope of this section, but such a language exists for sure. See more about this in the section about Template Engines further below in this chapter.
I hope you can see that the effort to get bi-directional data binding is not that huge. The actual code that's part of the binding stuff is less than twenty lines. It's a bit tricky in all the details, but apart from being not entirely trivial, it's not worth adding a complete framework with hundreds of kilobytes of code for such a simple thing. Write your own thin library or use an existing one that provides just this. It's mostly enough.
Forms and Validation
The same strategy can be used for validation. The idea here is to have validation information somewhere and use the binding code to handle any reaction to validation activity. That means, once a value changes, a callback is triggered to check the actual value against some rule (required, maxlength, or a pattern, for instance). The outcome of such a check is usually true or false. Now use the binding code to set an element's visibility: the error message appears or disappears. The effort is a bit more, because you'll need to handle the form's entire state. This includes the first appearance (called pristine), where you don't want all error messages to appear immediately. You need to handle the dirty state (values changed) and of course the validation state (valid / invalid).
A good approach is the usage of decorators. This could look like the following code snippet:
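For example (the class and property names are assumptions):

export class UserViewModel {
  @Required()
  userName = '';
}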
All the @Required
decorator needs to do is to create a hidden property that stores the validation instruction.
The validation code would now, when handling the property field, look for these magic properties and, if they exist, handle them accordingly.
Sketching a Solution
A working example shall show the effective effort needed to implement this idea. It's of course far from being complete and the code is reduced to the bare minimum to get it working. In fact, it lacks all error checking and is in no way universal. But more validation options, exporting the code base to external classes, and error checking are just refinements; they don't change the basic strategy at all.
First, let's have a look at the complete example. It consists of three parts:
- The decorator definition for a supposed Required decorator.
- A view model class that's using it.
- An example component with the glue code to get the decorator working.
The Required Decorator
The decorator has just one purpose: add a few hidden properties to the model class. It looks like this:
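A sketch following the description below; only the __req__ prefix is taken from the text, the remaining property names are assumptions:

export function Required() {
  // the returned function's signature makes it a property decorator
  return function RequiredDecorator(target: any, property: string): void {
    // marker: signals that this property has a Required rule attached
    Object.defineProperty(target, `__req__${property}__`, {
      get: () => true,
      enumerable: false,
      configurable: false
    });
    // the actual validation logic; `this` is the whole view model instance
    Object.defineProperty(target, `__req__${property}__validator`, {
      value: function (this: any): boolean {
        return this[property] !== undefined && this[property] !== null && this[property] !== '';
      },
      enumerable: false,
      configurable: false
    });
    // a hard coded error message as a fallback
    Object.defineProperty(target, `__req__${property}__err`, {
      value: `The field ${property} is required.`,
      enumerable: false,
      configurable: false
    });
  };
}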
The name is up to you; the function name determines how the decorator is named. More important is the returned function RequiredDecorator, whose name is not relevant but whose signature makes it work. The structure and type of the parameters determine that this decorator can be placed on a property. That's exactly what we want to do. We define three distinct properties, all with enumerable: false. That way we can loop over the "real" properties without interference from the special ones. The names are dynamic to make them dependent on the concrete property the decorator refers to.
The first, __req__$__, is just a marker. It always returns true. We use it as a trigger before we investigate the object further. The second is the actual validation logic. It looks up the property and returns true
if valid. Note that the object this
refers to the whole view model, not just the current property. That's quite helpful, as we can use it to compare properties in more complex scenarios. The third is an error message. It's hard coded here, but a more dynamic approach would be easy to add. You can add any number of parameters to this decorator to deliver a message, for example:
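A possible signature, sketched:

export function Required(message?: string) {
  return function RequiredDecorator(target: any, property: string): void {
    // the provided message wins, the hard coded text is just a fallback
    const errorText = message || `The field ${property} is required.`;
    // ...define the hidden properties as shown above, using errorText
  };
}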
Here the static text is just a fallback and the developer can override the message.
Now, once we have the decorator, we can use it on a model class like this:
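For instance (MaxLength stands in for a second, analogous decorator):

export class UserViewModel {
  @Required('Please provide a user name')
  @MaxLength(100)
  userName = '';

  @Required()
  email = '';
}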
The decorators are “stackable”, that means you can use multiple ones on the same property. They execute in the order of definition.
In this code snippet the @Required decorator will be called first, the @MaxLength one last.
Finally, the component that uses this model. Apart from the very common definition, the code that activates the validation is moved to the method bindValidation.
The code loops over the model’s properties. The hidden properties from the decorator are skipped. You can still access these hidden properties by using the []
named property syntax. To stop the TypeScript transpiler from complaining about such access there are two options. You can either allow this globally in the settings of tsconfig.json (allowAny
option). Or you can extend the model type to allow index access like this:
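The extension could look like this (names are assumptions):

// allows this.model[someDynamicKey] without TypeScript complaining
type Indexed<T> = T & { [key: string]: any };

// the component then declares the model with this type
private model: Indexed<UserViewModel>;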
That's definitely the better option. Using this definition we can now safely look for the trigger this.model[`__req__${property}__`], pull the related fields, and, if they exist, add the required action. In this example the selectors are data-val for any element that has to be validated and data-err for any element that can expose an error message to the user. The event listener watches the user typing, and once the validation property returns false the message appears. If everything is fine, the message disappears. When the form loads initially, the message is invisible.
9.3 UI less Components
Sometimes you want to provide functionality a view developer can use, but which doesn't actually create any UI. UI less components bring a more markup first approach to an application.
Directives
This is a further development of the UI less components described in the last section. It's not very well supported at the API level, hence the solution doesn't appear very attractive. Nonetheless, it's worth having a look at the strategic part.
The first definition, DirectiveDemo, is UI less and just retrieves some static data. It's added to the second component used to get it working (DirectiveDemoComponent). The API call to appendChild
is necessary to invoke the callback before the one of the hosting component is completed, otherwise the dataset would return undefined
. Now we have it as a component and can use it in all other components without worrying about references or implementation details. Also, we can add attributes for configuration.
Discussion
This example shows that Web Components still lack a lot of common features provided by frameworks. The implementation effort depends on what you try to achieve. In regard to UI less components I'd suggest not using this technique and moving the code to a Flux store, as shown in the chapter Single Page Applications, or using a publish/subscribe pattern directly. If you encapsulate the code in another layer of indirection it could be easier to use, but this is nothing you can achieve with just a few lines of code. The library @nyaf documented in the appendix has a full implementation for attribute based directives. It's effectively less than one kilobyte of code, so it won't be a real burden for a project, but the sheer amount of boilerplate code is significant if you think in terms of just a few components.
9.4 Template Engines
The first question about this should always be: “Do I really need this?”. If you would like to simplify the process of view creation, use any of the templating engines for JavaScript. With the powerful and convenient code style, web developers around the world have a chance to create real masterpieces.
Plugins have expanded beyond the comprehension of an average developer, and we also saw the highly anticipated release of ECMAScript 6, the new JavaScript standard. Frankly, ES6 was already on the way; all that needed to be done was for it to be finalized. Make sure to check out the full spec if you haven't done so already. ECMAScript 6 improvements include better syntax for classes, along with new methods for strings and arrays, Promises, Maps, and Sets.
We keep seeing huge growth with frameworks; Meteor, Angular, and React have made their way into the global JavaScript ecosphere. Needless to say, these have been some truly revolutionary additions to an already established system of development.
A templating engine is basically a way for developers to interpolate strings effectively. If you are a heavy front-end JavaScript developer, using a templating engine will save you countless hours of unnecessary work. And because of the vast array of templating engines available today, it can be tough to make the right choice at the right time. That said, we will take a look at the most popular templating engines for JavaScript today, the ones dubbed best by the community.
Mustache
Mustache is one of the most widely known templating systems that works for a number of programming languages, including JavaScript, Node.js, PHP, and many others. Because Mustache is a logic-less templating engine, it can be literally used for any kind of development work. It works by expanding tags in a template using values provided in a hash or object. The name logic-less comes from the fact that Mustache works purely by using tags. All values are set and executed according to tags, so you end up saving yourself hours of “nasty” development work. Take a strategic shortcut if you will.
Somehow Mustache is the mother of all template engines, as it’s the original implementation of the curly braces (hence the name) syntax:
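A minimal sketch, using the mustache package (the exact import style depends on your module setup):

import Mustache from 'mustache';

const template = 'Hello {{name}}, you have {{count}} new messages.';
const output = Mustache.render(template, { name: 'World', count: 3 });
// "Hello World, you have 3 new messages."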
The curly braces are the trigger to switch to the dynamic part, where additional functions may follow or just a replacement with variables takes place.
Handlebars
Handlebars is a close successor to Mustache with the ability to swap out tags where necessary. The only difference is that Handlebars is more focused on helping developers to create semantic templates, without having to involve all the confusion and time consumption. You can easily try out Handlebars yourself (there’s also an option to try Mustache on the same page) and see for yourself whether this is the type of templating engine you’re looking for. Last but not least, Handlebars was set up to work flawlessly in any ECMAScript 3 environment. In other words, Handlebars works with Node.js, Chrome, Firefox, Safari and others. The syntax is almost the same as for Mustache.
jQuery Templating
jQuery Templating provides everything you are looking for in a templating engine for JavaScript. It is a tool that you will find no trouble using. Not only that, it is fast, uses valid HTML5 and utilizes only pure HTML for templates. On the other hand, you can also pick up a jQuery object as the template. You can quickly populate the templates by simply calling jQuery.loadTemplate. jQuery Templating also ensures a clean final product, meaning the data will be flowing smoothly. Head over to the official website of jQuery Templating, learn how it works and how to apply it, and make a difference.
The basic idea is more HTML driven, using the dataset properties:
This is a so-called markup first approach, where you write pure HTML and enhance it according to your needs.
Lit Element (lit-html)
Lit Element is part of the Polymer project, one of the first (and still best) Web component thin libraries. Here the JavaScript part provides the initial call and a function is used to activate the templating engine.
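A minimal sketch of the idea:

import { html, render } from 'lit-html';

const greeting = (name: string) => html`<p>Hello ${name}!</p>`;
// render into any container element
render(greeting('World'), document.body);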
What may look like single quotes are in fact backticks, a pure JavaScript feature. The crucial part is the way the function named html
is called. Instead of using a regular function call with round brackets, this call writes the string directly after the function name. Because the backticks are in fact string interpolations (tagged templates), the JavaScript engine treats this syntax as a special function call. The receiving function does not get a string; instead, it receives an array of fragments that consists of the pure text parts and the interpolation parts (${}). That way the engine can replace and process the dynamic parts very easily.
The biggest advantage is speed. The engine is mostly pure JavaScript, does not need much template code and the replacements happen on a very basic level.
JSX / TSX
One of the best choices is JSX (if you use TypeScript it's called TSX, but it's exactly the same syntax). It was invented by Facebook for their famous UI library React. The idea is fundamentally different from all other templating engines. While in all other engines the HTML markup is a first class citizen and the dynamic part, the scripting stuff, is added by some magic syntax, JSX is primarily JavaScript. The script part is now the first class citizen and the HTML part is embedded where needed. This part, the markup, is not forwarded to the browser but parsed and replaced by JavaScript. Technically, each element is transformed into a function call. This chain of function calls returns HTML later.
This sounds complicated and the code behind is far from being trivial. But it has a real advantage. You can code in your template and do almost everything by just using JavaScript (or TypeScript). So, instead of learning some new syntax with all the typical rough edges, you work with what you already know perfectly well.
But the best thing about JSX is that any modern editor can handle it, without additional plug-ins or extensions. And those that can't have such an extension for sure. The TypeScript transpiler understands JSX very well and even here you don't need to add anything, just tell the tsconfig.json that you use JSX.
Let’s see an example to give an impression:
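For example (assuming the JSX namespace is configured as described in the next section):

const list = (
  <ul class="list">
    <li>One</li>
    <li>Two</li>
  </ul>
);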
This is simple yet powerful, because you handle the HTML part (note that there are no braces or quotes here) just like code. In fact, it is code at runtime (after transpiling).
But how does this work? The transpiler replaces the code with simple function calls:
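For the sketch above this is roughly:

const list = JSX.createElement('ul', { class: 'list' },
  JSX.createElement('li', null, 'One'),
  JSX.createElement('li', null, 'Two')
);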
That's it. What a typical library provides is the code that makes createElement work. How that looks is totally up to the library author. You can deduce from this that there is absolutely no relation to React anymore. React is just one, very good and complete, implementation of such a library.
9.5 Make your Own using JSX
At first sight this sounds simply crazy. The library React, where JSX was used for the first time, is complex and well developed. Nothing you can reproduce easily. But you won't need all of React, and stripped down to the bare templating part it's astonishingly easy.
Activate JSX
To activate JSX we use the TypeScript transpiler. There are other options, such as Babel, to use Pure ES2015 and beyond, but TypeScript is definitely the most flexible one. The configuration file tsconfig.json has two settings we actually need:
- “jsx”: This should be set to “react”.
- “reactNamespace”: This should be the name of our implementation.
Don't worry about the “react” setting. It has nothing to do with React. We just want to mimic its behavior. The name of the namespace is the implementation. Usually it's JSX, but any name will do. Let's keep the default for now.
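The relevant part of tsconfig.json would then look like this:

{
  "compilerOptions": {
    "target": "es2015",
    "jsx": "react",
    "reactNamespace": "JSX"
  }
}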
Implementing JSX
The transpiler's part is quite simple. It replaces all the JSX parts with calls to a function named createElement. The createElement function has three parameters:
- The element’s name
- An object with all attributes
- An object that provides the element’s children
The third parameter is usually again a call to createElement, where the cycle continues down the tree of elements.
The root element is the one we touch in our component. What the method returns is up to you. If you plan to assign the rendered template to innerHTML
then the return type would be string
. If you deal with DOM operations an instance of Node
or even HTMLElement
would be sufficient.
Let’s see an easy example that handles just pure HTML.
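A minimal string based sketch (no attribute escaping, no edge cases):

export class JSX {
  static createElement(name: string, props: { [key: string]: any } | null, ...children: string[]): string {
    // build the attribute list from the props object
    const attributes = props
      ? Object.keys(props).map(key => ` ${key}="${props[key]}"`).join('')
      : '';
    // children are already strings because nested calls run first
    return `<${name}${attributes}>${children.join('')}</${name}>`;
  }
}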
Why all this? Imagine you write such a piece in your script, because you need some HTML:
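Something like this, for example:

const box = (
  <div class="quote">
    <span>Hello</span>
  </div>
);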
This is pure JSX, but neither JavaScript nor the browser can read this. The transpiler is doing us a favor and transforms this to function calls:
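With the sketch above the result would be:

const box = JSX.createElement('div', { class: 'quote' },
  JSX.createElement('span', null, 'Hello')
);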
We have previously defined that a string value is sufficient in our app. Hence the function call shall produce something like this:
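That is, a plain string:

'<div class="quote"><span>Hello</span></div>'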
So, why not write this directly? Of course, that would work. But JSX is very well supported by editors, and with the way we wrote it in the first example the editor produces nice syntax highlighting, points to common mistakes, and starts understanding proper HTML semantics. That's a big advantage. Moreover, if you find some HTML you can just copy and paste it into your template, as is. If the snippet grows, just use multiple lines. Don't worry about using quotes with concatenation or backticks or whatever; this is no longer relevant. JSX makes no difference between script code and HTML.
The best thing comes next. What if you want to embed dynamic parts? Here you can use the curly braces shown earlier in this chapter. Just use single braces where code access is needed:
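For example (the name variable is an assumption):

const name = 'World';
const box = <div class="quote">{name.toUpperCase()}</div>;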
The final code is not just a replacement, it is the original code and appears like this:
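The transpiled form keeps the expression as is:

const box = JSX.createElement('div', { class: 'quote' }, name.toUpperCase());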
That means the code is pure JavaScript, with no restriction at all. It just needs to fit in as a parameter. That said, you can use the ternary operator expression ? true : false
, but you cannot use keywords such as if
or while
.
Extending the Syntax
Because we’re now in the position to control the render process, it’s easy to introduce additional templating features. I’m not going into much detail here. Adding support for a new template language on top of JSX would contradict the simplicity of the whole approach. But a few tweaks could be helpful. Let’s assume that in your code the usage of conditional rendering happens very often.
In bigger components this could lead to code blocks that become hard to read and heavily fragmented. Isn’t it easier to read something like this:
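Something like this, for instance (showPanel stands for any boolean in scope):

const panel = (
  <div class="panel" if={showPanel}>
    Details are only rendered when showPanel is true.
  </div>
);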
However, if is no valid HTML attribute, and it's exactly that kind of dynamic behavior that is the reason for all the template engines.
9.6 Summary
In this chapter I covered some professional coding styles and advanced subjects. Depending on your project and the concrete requirements, this shall guide you through the obstacles of huge applications or particular challenges. The examples include the usage of decorators, ideas to implement data binding, and a strategy to add an abstract validation layer. Last, but not least, a custom implementation of the famous JSX templating style is shown.
Introducing @nyaf
The name @nyaf is an acronym for “Not Yet Another Framework”. It is, in fact, an entirely new concept of Web development support libraries, a so called “thin library”.
It’s simple, has a flat learning curve, doesn’t need any special tools. Keep your tool chain, get the power. It can replace all the complex stuff such as React or Angular entirely, indeed.
No dependencies! No bullshit! Pure HTML 5 DOM API and ES 2015 Code. Super small, super smart, super powerful. Period!
Write frontend apps without the hassle of a complex framework, use the full power of HTML 5, keep a component based style.
Elevator Pitch
Since the amazing impact of jQuery in 2006 we have seen an uncountable number of JavaScript frameworks. Some good, some nice, a few excellent. Each era has its leading frameworks and an audience that loves them. This comes down to a few simple properties. A framework should save time compared with programming on a more basic level. It should give stability and reliability to your apps where things in the browser's internal parts get messy. And it adds another layer of indirection to make things smooth and good looking, nicely maintainable, and well architected.
But over time, frameworks get older. And they can't change and evolve, because they already have a broad audience and hundreds or thousands of projects rely on them. The manufacturer can't break everything to go the next step. The programmers get stuck. And the world of browser programming has evolved dramatically. Meanwhile, we have an amazingly powerful native API in HTML 5.
One of the most important innovations in browser development were Web Components. The API is easy to learn, the support is complete in all modern browsers, and the implementation is stable. At the same time the programming language TypeScript came to us, along with a powerful toolset.
It's time for the next step. Take the leading tools and create an easy to use library that covers the hard stuff and is invisible where the native API is already almost the best. That's the core idea behind @nyaf.
Parts
The library comes in three parts:
- A core library that handles Web Components the easy way, provides a router for Single Page Apps, and adds a nice template language.
- A forms library that handles data binding and decorator based validation.
- A store library that gives your app a state engine using the common flux architecture style.
Everything else is simple HTML 5 API, without any restrictions. You can add CSS, other libraries, or your own stuff at almost any position.
Additionally, there is a small CLI for easy setup and component creation.
Project Configuration with TypeScript
An @nyaf application consists of:
- An entry file for registering components, typically called main.ts
- At least one root component
- The index.html file the browser loads first
- The configuration for TypeScript, tsconfig.json
- The Packer / Builder setup;
The best choice for a Packer is probably WebPack, in that case a webpack.config.js file is recommended.
The Entry File
The recommended folder structure looks like this:
The application starts with the code in main.ts and the basic structure looks like Figure A-1.
TypeScript Configuration
The TypeScript configuration is typical, but two things are crucial to know:
- You need to compile with the target “es2015” (minimum). ES 5 is explicitly not supported anymore.
- The template language is a variety of JSX, so the setting “jsx” and “reactNamespace” are required.
@nyaf does not use React, has no relation to React and has almost nothing in common. The setting just tricks the compiler to transpile the templates.
First, the target must be “es2015” or higher. There are some native features used here that don’t have polyfills. The recommended template language is JSX (or in TypeScript it’s called TSX). It’s not enforced, you can also use pure string templates, but all examples in this documentation and the snippets shown online are using JSX. Hence the following settings are highly recommended:
- “jsx”: “react” – this activates JSX, though we don’t use React
- “reactNamespace”: “JSX” – the name of the support class in @nyaf (this is mandatory if JSX is used)
All other settings follow the common requirements of a typical TypeScript application.
WebPack Configuration
WebPack is the recommended packer tool, but you can use any other if you like. There is no dependency.
A typical configuration will look like this:
See the comments inline for important explanations. Apart from this the configuration has no special settings and follows the common rules of a typical WebPack setup.
Project Configuration with Babel
If you don't want to use TypeScript, you can still get the full power of @nyaf. All features the package requires are provided by ES2017 and above. The recommended tool to set up a bundle targeting the ES2015 that any modern browser supports is Babel.
This section describes the setup and usage with pure ECMAScript.
Setup the Environment
If you use Visual Studio Code it's recommended to tell the editor about the specific features you use, especially decorators. To do so, add a file jsconfig.json in the project root with this content:
This assumes your sources are in the folder ./src. Adjust the settings according to your needs.
Project Dependencies
Next add the following dependencies to your project’s package.json. This is the current Babel 7 setup.
This setup allows the compilation and packaging with WebPack, but the transformation invoked from WebPack is based on the Babel plug-ins.
Configuring Babel
Next, configure Babel to support the features @nyaf needs. This is primarily the JSX namespace, which is different from React. It's similar to the procedure described for TypeScript. However, the settings look a bit different.
You can use either .babelrc or the settings in package.json. The following example shows the settings in package.json (on root level).
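Based on the description that follows, the babel key might look roughly like this (the legacy flag for the decorator plug-in is an assumption):

"babel": {
  "presets": [
    "@babel/preset-env",
    ["@babel/preset-react", {
      "pragma": "JSX.createElement",
      "pragmaFrag": "null"
    }]
  ],
  "plugins": [
    ["@babel/plugin-proposal-decorators", { "legacy": true }]
  ]
}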
The core settings you’ll need are preset-react and plugin-proposal-decorators. The first activates the compilation for the JSX namespace JSX.createElement
. This is the exact and complete call to the @nyaf JSX module. The second parameter pragmaFrag is the support for the <></>
fragment syntax. In React it's React.Fragment. In @nyaf it's just nothing, as the JSX module treats missing element information as a fragment. To enforce this, we provide null
.
The decorator support is provided by a plugin. Babel takes care to compile this using a polyfill so it runs on the selected ECMAScript version.
Configure WebPack
The Babel transpiler can create a bundle, but putting it all together requires additional steps. The most powerful way (not always the easiest) is WebPack. The following webpack.config.js file is all you need to setup WebPack to create a bundle using Babel:
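Based on the description that follows, a minimal configuration could look roughly like this:

const path = require('path');
const HtmlWebpackPlugin = require('html-webpack-plugin');

module.exports = {
  // the application starts here
  entry: './src/main.js',
  resolve: {
    // component files use the .jsx extension, so resolve it too
    extensions: ['.js', '.jsx']
  },
  module: {
    rules: [
      {
        test: /\.jsx?$/,
        exclude: /node_modules/,
        // babel-loader invokes the Babel transpiler with the settings shown above
        use: 'babel-loader'
      }
    ]
  },
  output: {
    // the bundle goes to the distribution folder
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js'
  },
  plugins: [
    // adds the bundle reference to the copied HTML file
    new HtmlWebpackPlugin({ template: './src/index.html' })
  ]
};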
The entry point is the file main.js. All component files have the extension .jsx, so we need to resolve that extension, too. Apart from this, the babel-loader invokes the Babel transpiler and the settings described above apply here. The bundle is copied to the distribution folder dist and a reference to the bundle is added to the HTML file using the appropriate plug-in.
Writing Components
The components look exactly like the ones using TypeScript, apart from missing types and generics. Let’s assume you have this index.html:
This requires loading and upgrading one component. To do this, you need the start procedure in main.js:
The two demo components are shown below.
As you can see you use JSX and decorators along with ES2018 import/export instructions.
Improvements
Imagine a main file like this:
The import from @components makes it so much more convenient. To setup this local path resolution you need to create an index file for your components:
Then, set an alias in webpack.config.js to resolve this file:
To let Visual Studio Code accept this, too, add this jsconfig.json (look for the key paths):
Both the alias for WebPack and the paths key can handle multiple entries for more complex setups.
Bundle Size
For the demo files shown in the code above the whole bundle is 43.7 KB (11.6 KB zipped). The HTML remains with 230 Bytes (squeezed).
With all the loader and polyfill stuff this is an extremely small footprint for a client app. Forms and the Flux Store would add roughly another 10 KB.
The @nyaf CLI
The @nyaf CLI currently creates TypeScript projects only. To use Babel and pure JS refer to the documentation in this section.
Components
Components are the core ingredients. You write components as classes, decorated with the decorator CustomElement
. This defines a Web Component. The component must then be registered. This is done by calling the static method GlobalProvider.bootstrap
.
Registration Support
To support the registration as mentioned before we use decorators. This makes it quite easy to define a component without knowing the details of the browser’s API. The name is determined by @CustomElement('my-name')
. This is mandatory. The name shall follow the common rules of Web Components, which means it must contain at least one dash '-' so there is no risk of a collision with common HTML element names.
Let’s go step by step through this simple component.
First, the import includes not only the decorator, but the type JSX
too. That's necessary if you want to use JSX (or TSX) and let the TypeScript compiler translate the HTML syntax properly. The supporting class comes from @nyaf/lib and has absolutely no relation to React. It has, in some details, a different behavior compared with the JSX used in React. The import is necessary, even if there is no explicit usage in the module. Both the TypeScript transpiler and linters (such as TSLint) know about this and will not complain.
Second, the component has a base class. All @nyaf components are derived from HTMLElement
. Currently we don’t support inheriting from other element types.
Note also the usage of a base class, which gets a generic that later controls the access to the attributes.
Now, that the component is defined, it must be registered. In a file called main.ts (or wherever your app is bootstrapped) call this:
That's it, the component works now. Use it in the HTML part, usually called index.html:
Once you have more components, it may look like this:
The First Component
This section describes how to bring the component to life. I assume that you already have a typical TypeScript setup with tsconfig.json, package.json, and your favorite packer.
Create a file main.ts
in the src folder that looks like this:
Create a file main.component.tsx in the same folder (it must be *.tsx if you use JSX). Fill in this content:
Watch the default import for JSX - this is required, even if there is no explicit call. The TypeScript transpiler needs this when handling JSX files. It’s always
JSX
, even if we use *.tsx-files.
Create a file named index.html in the very same folder and fill it like this:
Your app starts in line 10.
Using the packer configuration you get the index.html file in the ./dist folder, a bundle, and a reference to this bundle to load the script. If you pack manually or keep the scripts separately add the script tags before the closing <body>
element.
Template Features
Template Features avoid using creepy JavaScript for interactions and branches. You can use any of the following:
- n-if, n-else
- n-hide, n-show
- n-on-<event> (see section Events)
- n-expand
n-if, n-else
The value will be evaluated and the element using this attribute does or does not render.
If there is an else-branch it can direct to a slot template. <slot>
elements are native web component parts.
n-hide, n-show
These attributes work the same as n-if
, but just add an inline style display: none
(or remove one) if true
(n-hide
) or false
(n-show
).
n-expand
This attribute expands a group of HTML attributes. Imagine an element like this:
You may need this several times, each with a different id. Instead of repeating the whole set of attributes, an expander can be used to add the static parts.
To define the expander shown above you create a class like this:
And yes, these are equal signs in the class. The named ‘quoted’ properties are only required if the attribute name contains dashes. Finally, add the definition to the global provider:
That's it, a lot less to write without the effort of creating components. It's just text replacement before the renderer grabs the content, so there is no performance impact at runtime. The expander logic does not perform any kebab-pascal conversion as some other tools do (that means the name myProp does not appear as my-prop automatically).
Quick Expanders
Quick expanders are even easier, but more for local expanding.
It’s just pure ECMAScript magic, no code from @nyaf required.
n-repeat
The basic idea of TSX is to write traditional code using map
or forEach
on arrays to create loops. In most cases this is the best solution. It provides editor support and you can use the full range of JavaScript API features to adjust the result. But sometimes a simple loop is required and the creation of a complete expression creates a lot of boilerplate code. In that case two variations of loops are provided, both with full editor support, too.
The n-repeat Component
This is a smart component that acts as a helper for common tasks. It's supported by one function for binding:
- of: Creates an expression to select a property from a model. The only reason is to have editor support (IntelliSense) without additional tools.
The n-repeat Attribute
Also, a @nyaf template function with the same name exists. It is supported by two other functions for the same reason:
- from: Defines a data source for repeating; must be an array of objects.
- select: Selects a property from the object type the array consists of.
Both examples would work with a type definition like this:
In the component the data assignment looks like this:
JSX / TSX
Fundamentally, JSX just provides syntactic sugar for the code line JSX.createElement(component, props, ...children)
function. The transformation and conversion to JavaScript is made by the TypeScript transpiler. In case you use pure JavaScript, the best tool to compile JSX is Babel.
Be aware, that while the main framework with native JSX support is React, @nyaf has absolutely no relation to React, and the behavior of the code is different.
Introduction
The next examples assume that some code surrounds the snippets or is just the return value of the render() method.
See some TSX code used in a component:
This piece of code compiles into the following function call:
JSX Scope
Since JSX compiles into calls to JSX.createElement
, the JSX class must also always be in scope in your TSX code.
For example, both of the imports are necessary in this code, even though JSX and CustomButton are not directly referenced from JavaScript:
Note that this is a default export, so no curly braces here!
If you don’t use a JavaScript bundler and load @nyaf from a <script>
tag, it is already in scope as a global object named JSX
.
The elements used in the JSX parts are registered globally and there is no additional import required. That's a fundamentally different behavior in comparison to React. In React the first argument is a type and the elements render themselves based on the given type. In @nyaf the first argument is a string, and the constructed element is pushed to the browser as a string through
innerHTML
, and the browser renders the content directly using native code.
Examples
You can also use the self-closing form of the tag if there are no children.
This piece of code compiles into this JavaScript:
If you want to test how some specific JSX is converted into JavaScript, you can try out the online Babel compiler. Be aware that any JSX oriented tool not explicitly configured for @nyaf may create the code with the namespace React
. In fact, the online Babel transpiler creates something like this:
That's pretty much the same, so it will work as a learning tool, but keep the changed names in mind.
Specifying the Element Type
The first part of a TSX tag determines the type of the element. It’s the name of a registered Web Component.
Web Components Must be in Kebab Style
When an element type starts with a lowercase letter, it refers to a built-in component like <div>
or <span>
and results in a string ‘div’ or ‘span’ passed to JSX.createElement
. Types that have a dashed name like <my-foo />
compile to JSX.createElement('my-foo')
and correspond to a component defined globally through GlobalProvider
.
We recommend naming components always with kebab style.
Properties in TSX
There are several different ways to specify properties in TSX.
JavaScript Expressions as Properties
You can pass any JavaScript expression as a property, by surrounding it with curly braces ({}
). For example, see this TSX:
For ‘my-component’, the value of props.foo will be 10 because the expression 1 + 2 + 3 + 4 gets evaluated.
if
statements and for loops are not expressions in JavaScript, so they can’t be used in TSX directly. Instead, you can put these in the surrounding code. For example see this snippet from a component class:
This method uses TSX expressions directly in the code. There is no relation to the render method; the expressions can appear anywhere and they will always return a string
value.
You can learn more about conditional rendering and loops in the corresponding sections.
String Literals
You can pass a string literal as a property. These two TSX expressions are equivalent:
When you pass a string literal, its value is HTML-unescaped. So these two TSX expressions are equivalent:
This behavior is usually not relevant. It’s only mentioned here for completeness.
Properties Default to True
If you pass no value for a property, it defaults to true. These two TSX expressions are equivalent:
In general, we don’t recommend not passing a value for a property, because it can be confused with the ES2015 object shorthand {foo}
which is short for {foo: foo}
rather than {foo: true}
. This behavior is just there so that it matches the behavior of HTML.
Spread Attributes
If you already have properties as an object, and you want to pass it in JSX, you can use … as a “spread” operator to pass the whole object. These two methods are equivalent:
You can also pick specific properties that your component will consume while passing all other props using the spread operator.
In the example above, the kind property is safely consumed and is not passed on to the <button>
element in the DOM. All other properties are passed via the ...other
object making this component really flexible. You can see that it passes an onClick and children properties.
Spread attributes can be useful, but they also make it easy to pass unnecessary properties to components that don't care about them or to pass invalid HTML attributes to the DOM. It's recommended to use this syntax sparingly.
Children in TSX
In TSX expressions that contain both an opening tag and a closing tag, the content between those tags is passed as a special property this.children. There are several different ways to pass children.
String Literals
You can put a string between the opening and closing tags and this.children will just be that string. This is useful for many of the built-in HTML elements. For example:
This is valid TSX, and this.children in MyComponent will simply be the string “Hello world!”. HTML is unescaped, so you can generally write TSX just like you would write HTML in this way:
TSX removes whitespace at the beginning and ending of a line. It also removes blank lines. New lines adjacent to tags are removed; new lines that occur in the middle of string literals are condensed into a single space. So these all render to the same thing:
TSX Children
You can provide more TSX elements as the children. This is useful for displaying nested components:
You can mix together different types of children, so you can use string literals together with TSX children. This is another way in which TSX is like HTML, so that this is both valid JSX and valid HTML:
The render method of an @nyaf component can also be async:
You can retrieve data asynchronously directly in the render method. In trivial examples this can dramatically simplify the component construction.
Expressions as Children
You can pass any JavaScript expression as children, by enclosing it within {}
. For example, these expressions are equivalent:
This is often useful for rendering a list of JSX expressions of arbitrary length. For example, this renders an HTML list:
JavaScript expressions can be mixed with other types of children. This is often useful in lieu of string templates:
Booleans, Null, and Undefined
The values false
, null
, undefined
, and true
are valid children. They simply don’t render. These TSX expressions will all render to the same thing:
This can be useful to conditionally render elements. This TSX renders the <app-header /> component only if showHeader is true
:
One caveat is that some "falsy" values, such as the number 0, are still rendered. For example, this code will not behave as you might expect, because 0 will be printed when data.messages is an empty array:
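The problematic pattern looks like this (the element name is an assumption):

<div>
  {data.messages.length &&
    <message-list messages={data.messages} />
  }
</div>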
To fix this, make sure that the expression before &&
is always boolean
:
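For example:

<div>
  {data.messages.length > 0 &&
    <message-list messages={data.messages} />
  }
</div>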
Conversely, if you want a value like `false`, `true`, `null`, or `undefined` to appear in the output, you have to convert it to a string first:
Select Elements
Using the HTML 5 API can be boring. Instead of using `querySelector` in the component’s code, use a decorator:
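A sketch of the idea; the decorator name @Select is an assumption and may differ in your version of the library:

```ts
@Select('#username')          // decorator name assumed; takes a querySelector-style selector
userField: HTMLInputElement;
```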
The decorated property is then filled with the actual element object.
Smart Components
Some features do not require additional code, they just need a clever usage of the power of TypeScript and Web Components. To simplify your life, a few of these are predefined as integrated components - the Smart Components.
Repeater - n-repeat
The repeater component creates a loop. In the following example an interface defines a single item. An array with items of this type is provided.
The repeater repeats the array’s elements. Each element provides properties you can place anywhere in the body using the `of<Type>` operator. It’s type safe; the editor will help you select the right properties from the given type.
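A sketch under assumptions (the source attribute name and the exact shape of the `of<Type>` call are assumptions):

```tsx
interface TodoItem { id: number; title: string; }

// inside render(), with this.data.todos: TodoItem[]
<n-repeat source={this.data.todos}>
  <li>{of<TodoItem>(t => t.title)}</li>
</n-repeat>
```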
Transparent Outlet n-outlet
This is another outlet that renders into nothing. Normally you would place a regular element as the target, but that would wrap your component in a DIV element. If this is disturbing, use the transparent outlet instead. A named variety is also available. The following sketch shows all three variants:
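The name attribute on the named variety is an assumption:

```html
<!-- regular outlet: the routed component ends up inside a DIV -->
<div id="outlet"></div>

<!-- transparent outlet: no wrapper element remains -->
<n-outlet></n-outlet>

<!-- named variety -->
<n-outlet name="main"></n-outlet>
```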
Render Finisher n-finish
Web Components render according to their life cycle. However, if you have a mix of components and regular HTML elements, the behavior can be weird, because the regular elements don’t have a life cycle. The best solution is to have a pure tree of web components. But if that is not possible and a predictable execution path is necessary, you need to tell the render engine when it’s really safe to render the parent element. To do so, add the element `<n-finish />` like this:
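A sketch built from the description that follows:

```tsx
<ul>
  <li>Static item</li>
  <some-component></some-component>
  <n-finish />
</ul>
```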
In that example the component waits for the lifecycle events of some-component but renders everything else immediately. If some-component exposes `<li>` tags too, they could appear after the static ones. If the order matters, the `<n-finish>` element helps enforcing the execution order.
The Life Cycle
Components have a life cycle. Instead of several events, there is just one method you must override (or ignore if not needed):
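A sketch of the override (the exact signature is an assumption):

```ts
lifeCycle(cycle: LifeCycle): void {
  if (cycle === LifeCycle.Load) {
    // the component has been rendered and loaded; safe to access the DOM here
  }
}
```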
Note that the method has a lower case “l”. The `LifeCycle` enum (upper case “L”) has these fields:
- `Init`: Start, the constructor is called.
- `Connect`: Component connects to the backend.
- `SetData`: A change in the data object occurred.
- `Load`: The render process is done and the component has been loaded.
- `PreRender`: The render method has been called and content is not yet written to `innerHTML`.
- `Disconnect`: Component is going to be unloaded.
- `Disposed`: After calling the `dispose` method.
The life cycle is also available through an event `lifecycle`. It’s exposed via a property called `onlifecycle` on the element level, too. The events are fired after the internal hook has been called.
State and Properties
There is no explicit difference between State and Property. Compared with React it’s much simpler. A state still exists and it supports smart rendering.
State
To declare a state object use a generic like this:
The state generic is optional. If no state is necessary, just use `any` or an empty object such as `{}`.
Now two functions are available:
- `data`: Returns the instance of the data object and contains all properties defined in the generic. This is protected and only available within the class.
- `setData`: Sets a changed value and, if the value differs, re-renders the component.
A simple counter shows how to use it:
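A sketch of such a counter; the base class, registration decorator, and the initialization in the constructor are assumptions based on the features described in this chapter:

```tsx
interface CounterState { count: number; }

@CustomElement('app-counter')                       // registration decorator name assumed
export class CounterComponent extends BaseComponent<CounterState> {
  constructor() {
    super();
    this.setData('count', 0);                       // initialize the state
  }

  inc() {
    this.setData('count', this.data.count + 1);     // change triggers a re-render
  }

  async render() {
    return (
      <div>
        <button n-on-click={this.inc}>+</button>
        <span>{this.data.count}</span>
      </div>
    );
  }
}
```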
Properties
Property names in JavaScript are in camel case while HTML attribute names are in kebab case (dash-separated) to match HTML standards. For example, a JavaScript property named itemName maps to an HTML attribute named item-name.
Don’t start a property name with these prefixes:

- `on` (for example, onClick)
- `aria` (for example, ariaDescribedby)
- `data` (for example, dataProperty)

Don’t use these reserved words for property names:

- `slot`
- `part`
- `is`
To use properties, you must define them. Each property is automatically part of the state and once it changes, the component re-renders.
The initializer with defaults is not optional; you must provide an object that matches the generic.
This is how you use such a component (part of the render method):
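A sketch showing both sides; the decorator signature follows the description below, while the surrounding component boilerplate is an assumption:

```tsx
// definition side: the default values are mandatory
@Properties<{ itemName: string }>({ itemName: '' })
export class ItemComponent extends BaseComponent<{ itemName: string }> { /* ... */ }

// usage side (part of a parent's render method); note the kebab-case attribute
<app-item item-name={'First item'}></app-item>
```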
The `@Properties` decorator defines all properties that are now monitored (observed) and hence the value is evaluated and rendered. If a value changes, the component re-renders itself automatically.
Accessing Properties
The access through the property `data` is available internally and externally. That means you can retrieve a component and set values like this:
As with `setData` internally, this will trigger the renderer to re-render the content with the new values, but in this case the trigger is outside the component.
Data Types
Web Components have the restriction that an attribute can transport string values only. This would lead to “[object Object]” for other types.
@nyaf overcomes this restriction with smart attribute handling.
That means the object is being recognized and stringified to JSON. Additionally, a custom attribute with the name “__name__” is written. Assume your value is written like shown below:
The rendered component would look like this:
Apparently the double double quotes work just fine. However, the content is now a string. If you do operations on this it will not resolve as the array it was before. Here the second attribute triggers a different behavior. The hook for the data Proxy used internally now applies a `JSON.parse` and returns the former object. Also, once set again, the incoming value is checked for being an object and stringified again. The technique currently works for `string` (default Web Component behavior), `number`, `boolean`, `array`, and `object`.
For extremely huge complex objects this technique might produce a performance penalty due to repeated `JSON.parse`/`JSON.stringify` calls. Be also aware that this cannot work if the object has recursive structures, because the JSON class cannot deal with those. There is no additional error handling to keep the code small; it’s just a `try/catch` block that reports the native error.
Properties and Models
For a nice looking view some decorators applied to class properties control the appearance.
Within the component, this model is now present. In the above definition `this.data` contains an actual model. The forms module contains a more sophisticated way to handle a view model with bi-directional data binding. The properties discussed here are for access from a parent component, while the forms module’s view models handle this internal binding.
Directives
Directives are extensions to host components that are bound to attributes. Think of it like smart handling for data, events, or actions.
Make a Directive
To make a directive you use the `@Directive` decorator and the base class `BaseDirective`. The decorator helps registering the class; the base class supports the editor and type safety.
A simple example shows how to make any element draggable.
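A sketch built from the pieces described below (selector, host injection, setup); the drag logic itself is illustrative:

```ts
@Directive('[directive="drag"]')
export class DragDirective extends BaseDirective {

  constructor(public host: HTMLElement) {
    super(host);                       // mandatory super call; host is injected
  }

  setup() {
    // called once after construction; attach listeners to the host element
    this.host.setAttribute('draggable', 'true');
    this.host.addEventListener('dragstart', (e: DragEvent) => {
      e.dataTransfer?.setData('text/plain', this.host.id);
    });
  }
}
```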
Directives are activated by any kind of selector `querySelectorAll` can process. In the example we use the `[directive="drag"]` selector, which is an attribute with a value. To apply this directive, two steps are required.
- As always, you must register your directive first
- You apply the selector to any element (standard HTML, Web Components, or own stuff - it works everywhere)
Registration
The registration is part of the `GlobalProvider`’s bootstrap process:
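A sketch; the exact bootstrap option names are assumptions:

```ts
GlobalProvider.bootstrap({
  components: [MainComponent],        // your web components
  directives: [DragDirective]         // register the directive here
});
```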
Activation
The directive applies once a component renders. That means the directive must be part of an @nyaf web component. But the actual assignment can be placed on any HTML element. If you have just one global component and pure HTML in it, then the directive will still work.
To activate the directive just add the selector to an element:
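For example:

```html
<div directive="drag" id="card-1">Drag me</div>
```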
The element now becomes a host element for the directive. One directive can be applied to many elements; they are isolated instances. For each occurrence of the selector a new instance of the directive class is created.
Working with Host Elements
To get access to the host element the directive shall modify, a property host is provided by the base class. It’s available immediately after the `super` call in the constructor and injected as a constructor parameter. That’s mandatory.
After the constructor call the infrastructure calls a method setup. It has no parameters and is not awaitable. It’s a good point to add event listeners or add further modifications to the element as shown in the example above.
The host element is aware of a shadow DOM, so it might be the host’s element object or a shadowed element. This depends on the usage of the `@ShadowDOM` decorator. There is nothing special here, you can use it directly. The type cast is `HTMLElement`. That means in TypeScript the properties specific to shadow DOM are not available in the API. In JavaScript they are still present, though, so you could enforce a cast like `this.host as unknown as ShadowRoot`. Usually, that’s a very rare situation anyway. The idea behind this behavior is to make the shadow DOM as transparent as possible, without forcing the developer to think about it.
Events
Events are defined by a special instruction. They are attached to the `document` object, regardless of the usage.
n-on-event
Events are easy to attach directly using an attribute like `n-on-click`. All JavaScript events are supported; just replace ‘click’ in the example with any other JavaScript event name.
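A sketch (render excerpt plus handler; the optional event parameter is shown):

```tsx
// in render()
<button n-on-click={this.clickMe}>OK</button>

// in the component class
clickMe(e?: Event) {
  console.log('clicked', e?.target);
}
```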
There is no `bind` necessary, events are bound to components anyway.
You can get the (original HTML 5 API) event through a parameter, like e in the sketch above. Because the method can be bound with or without the event object as a parameter, the handler can declare it as an optional parameter.
The `Event` type conforms to the HTML 5 DOM. Replace the type according to the attached event (`MouseEvent` etc., see here for details).
Syntax Enhancements
This section shows some variations of the event syntax that might better suit your needs.
Short Form
If you don’t need access to the parameters of the event (example: a click, which just happens), a short form is possible:
Additional Parameters
You can add constant values like this:
Warning! Regardless of the type, the received value will be a `string` at runtime.
This works, but the function will receive “100”.
This works, too, but the function will receive “1 + 2”. The expression is not being executed! So, this is somewhat limited in the current version. You can add multiple parameters, though.
Usually, it doesn’t make sense to have calculations on constant values. So in reality this isn’t a serious limitation.
Async
You can combine any event with the attribute `n-async` to make the call to the event’s handler function async. This attribute does not take any parameters. The handler method can be decorated with `async`.
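For example:

```tsx
// the handler may be declared as: async save() { ... }
<button n-on-click={this.save} n-async>Save</button>
```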
Custom Events
Sometimes the JavaScript events are not flexible enough. So you can define your own ones. That’s done by three simple steps:
- Add the decorator `@Events` to declare the events (it’s an array to declare multiple in one step). This is mandatory.
- Create a `CustomEventInit` object and dispatch it (this is native Web Component behavior).
- Use the `n-on-<myCustomEventName>` attribute to attach the event in the parent component.
Imagine a button component like this:
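A sketch of such a button; the base class and registration decorator names are assumptions, the dispatch uses the native CustomEvent API:

```tsx
@CustomElement('app-button')
@Events(['showAlert'])                       // declare the custom event
export class ButtonComponent extends BaseComponent<{}> {

  onClick() {
    const init: CustomEventInit = { bubbles: true, detail: { text: 'Clicked!' } };
    this.dispatchEvent(new CustomEvent('showAlert', init));
  }

  async render() {
    return <button n-on-click={this.onClick}>Click me</button>;
  }
}
```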
The custom event in this example is called showAlert. It’s invoked by a click. The element’s host component has code like this:
The argument e contains a `CustomEvent` object. It can carry any number of custom data. The `click` invoker is just an example; any action can call a custom event, even a web socket callback, a timer, or an HTTP request result. Both `CustomEvent` and `CustomEventInit` have a field `detail` that can carry any object or scalar and is the proposed way to transport custom data with the event. The event handler could look like this:
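A sketch (the parent attaches the handler via the custom attribute named above):

```tsx
// parent render: <app-button n-on-showAlert={this.onShowAlert} />
onShowAlert(e: CustomEvent) {
  alert(e.detail.text);          // detail carries the custom payload
}
```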
Custom events can be async, too. Just add `n-async` to the element that fires the event and add the `async` modifier to the handler.
Router
Usually we create SPAs (Single Page Apps). Hence we need a router. The included router is very simple.
First, define an outlet where the components appear:
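A sketch; the outlet marker attribute name is an assumption:

```html
<div n-router-outlet></div>
```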
Any kind of parent element will do. The router code sets the property `innerHTML`. Components that are being used to provide router content need registration, too. They must have a name, because that’s the way the router internally activates the component.
There is just one default outlet. See further below for using named outlets.
Register Routes
The following code shows how to register routes:
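A sketch built from the entries described below (component names are illustrative; the bootstrap option names are assumptions):

```ts
const routes: Routes = {
  '/': { component: DemoComponent },          // default route, mandatory
  '/about': { component: AboutComponent },
  '**': { component: DemoComponent }          // optional fallback
};

GlobalProvider.bootstrap({
  components: [DemoComponent, AboutComponent],
  routes
});
```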
The first entry `'/': { component: DemoComponent }` shall always exist; it’s the default route loaded on start. It’s recognized by the `'/'` key (the position in the array doesn’t matter). The entry `'**': { component: DemoComponent }` is optional and defines a fallback in case an invalid path is being used.
You can shorten the property in the bootstrap script, too:
Using Routes
To activate a router you need a hyperlink. The router’s code looks for a click onto an anchor tag. An appropriate code snippet to use the routes looks like this:
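For example:

```html
<a href="#/about" n-link>About</a>
```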
The important part here is the `n-link` attribute. Using this you can distinguish between navigation links for routing and any other anchor tag. You can also use a `<button>` element or any other element. Internally it’s just a `click` event that’s handled and that then checks for the attribute.
Please note the hash sign (#); it’s required. There are no routing strategies to configure here, write it yourself and then enjoy the very small footprint of the outcome.
Pro Tip! Import the router definition and use additional fields to create a menu directly from router configuration.
If you have some sort of CSS framework running that provides support for menu navigation by classes, just add the class for the currently active element to the `n-link` attribute like this:
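For example:

```html
<a href="#/about" n-link="active">About</a>
```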
After this, by clicking the hyperlink, the class “active” will be added to the anchor tag. Any click on any `n-link` decorated tag will first remove all these classes from all these elements. The class name can differ and you can add multiple classes; it’s treated as a string internally.
Named Routes
The underlying route definition, the type `Routes`, allows two additional fields (`outlet` and `data`):
With `outlet` you can define a named outlet. If you use this, you must name all routes, as there is no fallback currently. The route outlet might reside anywhere. It may look like this:
If the route’s components deliver `<li>` elements, you can also use something like this to build well formatted HTML:
There is no difference on the link side, the decision to address another outlet is made in the configuration only. If the outlet doesn’t exist nothing happens and a warning appears on the console (in DEBUG mode).
In the example I use routes that look like child routes. That’s a hint for the intended behavior, but technically it’s not necessary to do so. The resolver is very simple and doesn’t care about route hierarchies; it’s just matching the string and seeking the outlet.
Additional Data
The last example showed another field `data`. This is a dictionary with arbitrary data that is just stored here. If you set up a navigation dynamically based on the configuration data, you can control the behavior in a well defined way. However, there is no code intercepting these data; it’s the task of the implementer to do something useful here.
Special Values
If you use `data: { title: 'Some Title' }`, the value in the field title is copied to the website’s `title` field. That way it appears on the tab (or header bar in Electron). If it’s omitted, it’s not being set at all.
Navigate to Route
You can navigate by code:
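A sketch, assuming the method is exposed on the GlobalProvider class:

```ts
GlobalProvider.navigateRoute('/about');          // no '#' prefix here
GlobalProvider.navigateRoute('/help', 'side');   // second parameter overrides the outlet
```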
The outlet is pulled from configuration, but if provided as second parameter it can be overwritten.
Hint: In the link elements you use the ‘#’ prefix. In the `navigateRoute` method this is not necessary and hence not allowed.
Route Events
The router fires two events, available through the static `GlobalProvider` class like this:
If you have a dynamic component and you set the event handler, don’t forget to remove the event handler in the dispose callback.
Shadow DOM
By default the shadow DOM is not used. If it were, styles would be isolated and no global styles would be available.
One option to activate the Shadow DOM is using this decorator:
A parameter can be set explicitly. This is some kind of coding style, a more expressive form.
Another interesting option controls the style behavior:
- The decorator @ShadowDOM must be set, otherwise the decorator @UseParentStyles does nothing
- If active, it copies all global styles into the component so they work as expected even in Shadow DOM
It’s a trade-off. The shadow DOM increases performance and brings isolation. Copying many styles decreases performance and contradicts isolation.
See the following example for a common usage scenario:
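A sketch of the combination; the registration decorator and the parentheses on the style decorator are assumptions, the component body is omitted:

```ts
@CustomElement('app-tabs')        // registration decorator name assumed
@ShadowDOM(true)                  // explicit parameter, the expressive form
@UseParentStyles()                // copies global styles into the shadow root
export class TabsComponent extends BaseComponent<{}> { /* ... */ }
```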
The shadow DOM goes well along with the usage of slots. A typical example is a Tabs Component that’s shown next. Tabs are a form of navigation for web sites, similar to the browser’s tabs.
Example with Shadow DOM
First, we start with the definition of a single tab.
Single Tab
The `<slot>` element is the content target. The id is used to address the tab (to open it, actually).
Tabs Container
Second, look at the container that handles multiple tabs.
Usage of the Tabs
The usage is quite simple. Just add as many tabs as required:
Shadow DOM and Styles
The Shadow DOM provides full isolation. The `@UseParentStyles` decorator contradicts this. A better way is to include styles “per component”. Have a look at an example first:
The important part here is, despite the `@ShadowDOM` decorator, the `part` attribute. It makes the shadowed component accessible (penetrable) for special external styles using the `::part` pseudo-selector. A stylesheet could then look like this:
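A sketch of such a stylesheet (the selector follows the part name mentioned below; the style rules are illustrative):

```css
app-directive::part(drop-zone) {
  border: 2px dashed gray;
}
```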
This style is provided globally, not as part of the component, but it applies to this component only and only in shadow mode.
Note that using the regular CSS syntax, such as `app-directive[part="drop-zone"]`, would not work, as this cannot penetrate the shadow DOM.
This is not a feature of @nyaf; it’s default Web Component behavior. We face some issues with older browser versions that don’t understand the `::part` selector properly. Consider adding a polyfill if needed.
Services
Once in a while we need to get access to an injectable service. That’s also a task for a decorator to extract that kind of infrastructure code from the component’s body.
this.services is a function that returns an instance of the service. Services are singletons on the level of the local name. The same name used in different components will return the same instance. Using a different name will create a new instance.
Async is an option; it can be sync, too. However, the render process is always asynchronous internally.
The third option of `@InjectService` allows you to define a singleton. Instead of providing a type for the second parameter of the decorator, here you must provide an instance. The same name will be shared across components.
Forms Module
Forms provide these basic features:
- UI control decorators (example: `@Hidden()` to suppress a property in a dynamic table).
- Validation decorators (example: `@MinLength(50)` or `@Required()` to manage form validation).
- Data binding using a model declaration decorator called `@ViewModel` and a bind attribute named `n-bind`.
Form validation is a key part of any project. However, CSS frameworks require different strategies to handle errors and so on. Hence, the @nyaf/forms library provides a simple way (just like a skeleton) to give you the direction, but the actual validation implementation logic is up to you to build.
Same for the UI decorators. It’s a convenient way to add hidden properties to viewmodels. There is no logic to read these values; it’s up to you to implement this. However, the decorators make your life a lot easier.
The binding logic is almost complete and once you have a decorated model it’s syncing the UI automagically.
How it Works
For full support you need view models, the registration on top of the component, and access to the model binder.
- View models are plain TypeScript classes with public properties, enhanced by decorators.
- The registration happens with the decorator `@ViewModel()` on top of the component’s class.
- The model binder comes through implementing the interface `IModel<ViewModelType>`.
View Models in Components
For a nice looking view, some decorators applied to class properties control the appearance. Use the decorator `@ViewModel<T>(T)` to define the model. The generic is the type, the constructor parameter defines the default values (it’s mandatory). To get access to the model binder, just implement the interface `IModel` as shown below:
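A sketch; the `ModelBinder` type on the model property and the component boilerplate are assumptions:

```ts
export class ContactComponent extends BaseComponent<{}> implements IModel<ContactModel> {

  @ViewModel<ContactModel>(ContactModel)      // the type delivers the mandatory defaults
  model: ModelBinder<ContactModel>;           // added automatically by the decorator

  // ... render() with n-bind instructions
}
```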
Within the component, the model binder is present through the property `this.model`. That’s the only property and it’s added automatically by the decorator. The interface just helps the TypeScript transpiler to understand that the property exists.
An actual object is already assigned to the property by a so-called model binder. At any time, in the constructor, in the load life cycle, or any time later on user action, you can assign a new model if you need to. That’s a rare condition, though. Use this code, then:
However, the `@ViewModel` decorator is doing exactly this for you, so in case of a new blank instance there is no need to assign a new object to the scope property.
It’s not necessary to keep a reference to the instance, the model binder is doing this internally for you. The derived class is a `Proxy`. If you now bind the properties using `n-bind` as described below, the model stays in sync with the user interface. If you want to programmatically access the current state, just retrieve the model:
If you wish to access the `Proxy` at any time in code, or when not using the binding in templates, this would be sufficient:
The setter of `scope` takes an instance, wraps it into a `Proxy`, assigns the binders, and the getter returns the `Proxy`. Changes to the model will now reflect in bound HTML elements immediately.
View Models
First, you need view models. Then, you decorate the properties with validation and hint decorators.
Why Using View Models?
View Models form an abstraction layer between code and pure user interface. They appear in many architectural patterns, such as Model-View-Controller (MVC), Model-View-ViewModel (MVVM), and similar constructs. A component library such as @nyaf is a different kind of pattern, but the basic need for a model is still valid.
The ViewModel is essential when you want a separation of concerns between your DomainModel (DataModel) and the rest of your code.
Decoupling and separation of concerns is one of the most crucial parts of modern software architectures. Models can contain code, have actions, and work as a distinct translator between the domain model and the view. It’s not the same as the business logic; it’s a layer between such a layer and the user interface (UI). A UI contains logic to control visible elements, such as tooltips, hints, and validation information. All this has no or only a weak relation to an underlying business logic. Mixing the two will create code that is hard to maintain, complex, and difficult to read. In software technology we often talk about so-called software entropy. That’s the process of code changing over time into an even harder to manage form, with a lot of hacks, bells, and whistles nobody understands completely and that will eventually start to fail. Levels of abstraction help to delay this process (it’s an illusion that you can avoid it entirely). View models are hence an essential part of a good architecture.
The flux architecture, delivered by the @nyaf/store module, seems to address a similar approach using store models and binding. However, this part is entirely devoted to the business logic. It’s exactly that kind of abstraction we need to make great software not only by great design and an amazing stack of features, but by sheer quality, stability, with maintainable code, hard to break, and almost free of nasty bugs.
Creating a View Model
A view model could look like this:
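A sketch using the decorators from the tables below; the custom message on @MaxLength illustrates the optional last parameter:

```ts
export class ContactModel {

  @Display('User name')
  @Required()
  name = '';

  @Display('E-Mail', 'Your contact address')
  @EMail()
  email = '';

  @MaxLength(100, 'The memo must not exceed 100 characters.')
  memo = '';
}
```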
The last (optional) parameter of the validation decorators is a custom error message.
Validation Decorators
Validation decorators can be used together with the binding. After a binding action took place, usually after a change by the user, the state of the bound model updates and can be retrieved immediately. Elements bound to validation signals can use the state to show/hide elements or control sending data in code. The property names are the same as the respective decorators, just all lower case.
Decorator | Usage
---|---
@MaxLength | The maximum length of a text input.
@MinLength | The minimum length of a text input.
@Pattern | A regular expression that is used to test the text or number input.
@Range | A range (from-to) for either numerical values or dates.
@Required | Makes the field mandatory.
@EMail | Checks input against a (very good) regular expression to test for a valid e-mail pattern.
@Compare | Compares with another field, usually for password comparison.
@Custom | Provide a static validation function as a callback for any custom validation.
UI Decorators (property level)
UI decorators control the appearance of elements. Not all have an immediate effect, but it’s very helpful while creating components to have meta data available.
Decorator | Usage |
---|---|
@Display | Determine the label’s name and an (optional) tooltip. |
@DisplayGroup | Groups components in <fieldset> elements. Can be ordered inside the form. |
@Hidden | Makes a hidden field. |
@Sortable | Makes a column sortable in table views. |
@Filterable | Adds a filter in table views. |
@Placeholder | A watermark that appears in empty form fields |
@ReadOnly | Forces a particular render type. |
@TemplateHint | What kind of field (text, number, date, …) and additional styles or classes. |
The UI decorators do not enforce any specific behavior. In fact, they do almost nothing in an application without explicit support. The decorators create hidden properties you can retrieve when building the UI. That way you can control the behavior of the app by setting the decorators. It’s some sort of abstraction between the view model and the UI.
Mapping the Properties
The actual properties are of the form __propName__fieldName; that means, you have to add the decorated property name at the end to retrieve the property specific value. As an example, the displayText property, created by the decorator `@Display` and placed on a property email, can be retrieved by this code:
However, the internal names may change and to avoid any issues a mapping with external names is available. The following table shows the properties the decorators create.
Decorator function | Properties
---|---
Display | text, order, desc
DisplayGroup | grouped, name, order, desc
Hidden | is
Sortable | is
Filterable | is
Placeholder | has, text
ReadOnly | is
TemplateHint | has, params, name
Special Decorators
There is one more decorator that’s not just defining the UI behavior but has some internal behavior.
Decorator | Usage |
---|---|
@Translate | For i18n of components |
This decorator can be placed on top of the view model, on class level, or on a specific property.
The function expects a JSON file with translation instructions. The translation converts the text on the properties to another language:
This decorator is experimental and will change in the near future to reflect a more powerful approach.
Providing the ViewModel
To make a ViewModel accessible you use the `@ViewModel(T)` decorator. More about this in the chapter Data Binding.
Data Binding
Data Binding is one of the most important features in component development. It brings an immediate effect in simplifying the data presentation layer. Instead of the chain “select” –> “set” –> “listen” –> “change” you simply bind an object’s properties to an element’s attributes. That could go in both directions.
Template Language Enhancements
@nyaf has a simple template language extension for binding. For forms it’s just one more command for any input element, `n-bind`. See the following excerpt from a component.
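A sketch of such an excerpt (the bound property names match the explanation that follows):

```tsx
<input n-bind="value: Name" />
```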
Now the field knows everything about how to render and how to validate. The first item (“value”) is the HTML element’s property you bind to. The second is the model’s property name (“Name”).
For a good UI you need a label usually:
Terms and Parts
To understand the binding you must know what a view model is and what role the model plays inside the form.
View Model
The actual definition of the model that is bindable is provided through the decorator `@ViewModel(T)`. T is a type (class) that has properties decorated with validation and UI decorators. More about this can be found in chapter View Models.
IModel<T> interface
The `@ViewModel` decorator creates an instance of the model class. The interface enforces the visibility of the model in the component. The definition is quite easy:
The instance of the modelbinder gives access to all binding features.
Binding Handlers
Binding Handlers are small function calls that handle the data flow between a viewmodel property and an element attribute. There are a few default binding handlers available.
Smart Binders
Instead of using the string form you can use the TSX syntax and binding functions:
- `to`: Generic function to bind a property to the default attribute, optionally using a custom binder.
- `bind`: Generic function to bind a property to any attribute.
- `val`: Bind validation decorators to an attribute. See validation.
See in Section Smart Binders for details.
Creating Forms
The model is provided by the `@ViewModel` decorator and the `IModel<T>` interface like this:
The form now binds the data. It’s bi-directional or uni-directional depending on the chosen binding handler.
Standard Binding Handlers
The forms module comes with a couple of pre-defined binding handlers:
Name | Key | Direction | Applies to | Base Element
---|---|---|---|---
Default… | 'default' | uni | attribute | HTMLElement
Checked… | 'checked' | bi | attribute checked | HTMLInputElement
Text… | 'innerText' | uni | property textContent | HTMLElement
Value… | 'value' | bi | attribute value | HTMLInputElement
Visibility… | 'visibility' | uni | style visibility | HTMLElement
Display… | 'display' | uni | style display | HTMLElement
The actual handler names are XXXBindingHandler (Default… is actually DefaultBindingHandler). If in the binding attribute the text form is being used (‘innerText: userName’), the key value determines the used handler.
The handler provides the active code that handles the change call and applies the changed value to the right target property. That can be any property the element type supports, directly or indirectly anywhere in the object structure. Such a deeper call happens in the style handlers, especially `VisibilityBindingHandler` and `DisplayBindingHandler`.
Smart Binders
There is an alternative syntax that provides full type support:
The function `to<Type>` from the @nyaf/forms module has these syntax variations:
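A sketch of the variations (the view model and element types are illustrative):

```tsx
<input n-bind={to<ContactModel>(c => c.name)} />
<input n-bind={to<ContactModel, HTMLInputElement>(c => c.name, 'value')} />
<input n-bind={to<ContactModel, HTMLInputElement>(c => c.name, e => e.value)} />
```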
The generic parameters are as follows:

- The view model type. This is mandatory.
- The element type. This is optional; if omitted it falls back to `HTMLElement`.

The parameters are as follows:

- A lambda expression to select a property type safe (`c => c.name`). This is mandatory.
- The key of a binding handler. Any property available in `HTMLElement` is allowed (and it’s restricted to these).
- The (optional) type of decorator that’s used to pull data from. If it’s omitted, the actual data appear.
Obviously you could think about writing this:
This is rejected by the compiler, because the property value doesn’t exist in `HTMLElement`. To provide another type, just use a second generic type parameter:
Here you tell the compiler that it’s safe to use `HTMLInputElement`, and so the editor allows value as the second parameter. An even smarter way is to use the lambda here, too:
Both ways are type safe; even the string follows a constraint. The string is usually shorter, the lambda might use an earlier suggestion from IntelliSense.
The binding behavior is tricky but powerful. The intention is to provide rock solid type safety. You must provide an element attribute that really exists to make a successful binding. Everything else wouldn’t make any sense. But to actually bind properly, you must provide a Binding Handler that can handle this particular binding.
Multi Attribute Binding
The `n-bind` attribute is exclusive, so you can bind only one attribute. That’s fine for most cases, but sometimes you’ll need multiple bindings. In Angular this is easy through the binding syntax around any element (`<input [type]="source" [value]="model">`). However, this would require a template compiler and additional editor support. To overcome the limitations here, the `bind` function is available.
In the next example two properties are bound:
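A sketch (both attributes receive a bind instruction, plus the empty n-bind trigger described below; property names are illustrative):

```tsx
<input value={bind<ContactModel>(c => c.name)}
       placeholder={bind<ContactModel>(c => c.hint)}
       n-bind />
```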
The `n-bind` is still required to efficiently trigger the binder logic. It’s now empty, though (the default value is `true` internally). Please note that you cannot bind to deeper structures in the current version (e.g. `style.border={bind<T>()}` is not possible). That’s typically a way to bind styles in Angular, but this would violate the rule that standard @nyaf templates shall be standard TSX files that any editor can handle without additional tool support. To support a scenario with style binding, refer to section Custom Binders.
If the binding handler is not provided, it falls back to a `DefaultBindingHandler`, which binds uni-directionally to the assigned attribute. That has two limitations. First, it’s always uni-directional. Second, it can bind only to attributes of `HTMLElement`. Object properties, such as `textContent` or `innerText`, cannot be reached that way. That’s indeed the same with Angular, where you need to encapsulate elements in custom components to reach hidden properties, but in @nyaf there is a much smarter way.
If you want to bind to anything else, just assign another binding handler.
The binding handler may write into whatever property you like, even those not available as attributes. See section Custom Binders for more details.
Even More Smartness
You may also define your component as a generic. That avoids repeating the model name over and over again. Imagine this:
And in that case use a shorter form to express the binding:
T is a placeholder here. Use any name you like to define a “type”.
That’s cool, isn’t it? Now we have a fully type safe binding definition in the middle of the TSX part without any additions to regular HTML.
And in case you have special properties beyond `HTMLElement`, then just provide the proper type like you did before:
This gives full type support in any editor for all types, even custom Web Components will work here.
This technique avoids parsing the template, and the missing parser makes the package so small. The function simply returns a magic string that the model binder class recognizes at runtime. The function call with a generic helps the editor to understand the types and avoids mistakes.
Validation
Form validation is painful to program from scratch. @nyaf/forms provides an integrated but flexible validation system.
View Model Decorators
First, you need a viewmodel that has validation decorators. It’s the same kind of model used for regular binding. Again, here is an example:
Especially the validation decorators are in control of the validation (`Required`, `MaxLength`, and so on). In the binding instruction you tell the environment with what decorator a property has to be connected.
State
The validation state is available through `state`:
It’s supervised. After the component is rendered, the property this.model.state holds the state of the model.
After a binding happens the validators are being executed and the instance values change. You can retrieve the values in a method or an event handler. To set UI elements interactively and immediately, you again use the `n-bind` attribute and the appropriate binding function like `to` and `bind`.
Bind to Validators
An error message is just regular output (the class values are taken from Bootstrap and they’re not needed by the @nyaf/forms module):
Validators can provide the error text, too. This is driven by decorators. The decorators fall back to a simple plain English error message in case you don’t provide anything. You can, however, provide any kind of message in the decorator. In case you need i18n messages, just add the `@Translate` decorator as a parameter decorator to the message parameter.
Distinguish between different validators like this:
The smart binder `val` is the key ingredient here. It takes three parameters:

- An expression to access the model’s actual value.
- A validator for which the binding is responsible (it must also be present on the view model’s property).
- A display handler that pulls the values and assigns them to the right property.
In the above example the view model has this property:
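A sketch of such a property (the custom message is illustrative):

```ts
@MaxLength(100, 'The user name is too long.')
userName = '';
```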
Now, the binding instruction looks like this:
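A sketch (the model type and the message element are illustrative):

```tsx
<div class="error"
     n-bind={val<ContactModel>(c => c.userName, MaxLength, DisplayBindingHandler)}></div>
```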
The `DisplayBindingHandler` is smart enough to know that it’s bound to an error message. It now reads the second parameter, which is `MaxLength`, and binds these two parts. First, it binds the error message to `textContent`. That’s a static assignment. Second, it binds the `display` style to the `isValid` method of the view model. This method is set through the `MaxLength` decorator and knows how to determine the state ‘maxlength’. The property is bound through the scope’s Proxy dynamically and once the value changes, irrespective of the source of the change, it fires an event and the model binder holds a subscriber for this. Here, the value is taken and handed over to the `isValid` method. This method is bound to the handler, which sets the style accordingly. That setting is reversed, meaning that the value `true` makes the message invisible, while the value `false` makes the message visible (`isValid === false` tells you an error occurred).
If you use the `DisplayBindingHandler` or `VisibilityBindingHandler` directly, without validation but in conjunction with binding operations, then they will work straight: true makes an element visible, and false invisible.
Handler Behavior
The `DisplayBindingHandler` sets `display: none` or `display: block`. The `VisibilityBindingHandler` sets `visibility: hidden` or `visibility: visible`. These are the most basic handlers and available out-of-the-box.
If you need other values you must write a new handler with the desired behavior. This is, fortunately, extremely simple. Here is the source code for the handlers:
The `Binding` instance, provided internally, delivers a boolean value. The element el is the element that has the `n-bind={val<T>()}` instruction. T is the model that drives the content using decorators.
Additional Information
Objects are always set (not undefined), so you don’t have to test them first. The property names are the same as the decorators, but in lower case:
- `@MaxLength`: maxlength
- `@MinLength`: minlength
- `@Pattern`: pattern
- `@Range`: range
- `@Required`: required
- `@EMail`: email
- `@Compare`: compare
Custom Binders
Custom Binders help binding to specific properties. They can be used like the embedded binders, which act just as examples and are built the same way.
Implementing a Custom Binder
A custom binder handles the binding procedure when binding a viewmodel property to an element attribute. It consists of three parts:
- The binding setup (
bind
) - The binder into the element (a property change leads to an attribute change)
- The listener (an attribute change event leads to an updated model property)
Step 2 and 3 are both optional, omitting them is leading to a uni-directional binding in one or another direction.
Note here, that despite the base class the decorator @BindName is required. The argument is the name of the class. In the views’ code the binder class’ name can be used to determine the behavior. But in case the project is packed by an aggressive packer, the names of the classes might be minified. The code compares the names and due to different minification steps it could happen that the comparison fails. The decorator writes the name into an internal property and the compare code can retrieve this properly. If the view code uses strings instead of types, using this decorator is not necessary.
How it Works
First, you need to implement `IBindingHandler`.
A handler must react to something, but everything else is optional. See this line:
The generic is optional. It allows the definition of the target element. If it’s omitted, it falls back to `HTMLElement`. You can use any HTML 5 element type or any custom web component type (as in the example).
The `bind` method is called implicitly by the infrastructure. If it doesn’t exist it’s being ignored. The only reason to use it is attaching an event listener. You may also consider calling `react` immediately to sync the data, but it depends on the actual behavior of the element and may result in an additional binding process while loading the form.
The method `react` is called from the view model proxy instance each time the value changes. Write code here to assign the data to any property of the target element.
The method `listener` is optional and is called once the target element raises an event. You can access the original event if provided.
A Simple Binder
How simple a binder can be is shown next with the already embedded uni-directional default binder:
The only difference here is that the `ModelBinder` class intercepts the access and delivers the name of the attribute as a second parameter. This is a special behavior and the default handler can handle this.
Note that all examples have almost no error and exception handling. Add this if you want a more robust application.
Installation of Forms Module
Install the package:
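```bash
npm install @nyaf/forms
```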
The type definitions required for TypeScript are part of the packages and no additional type libraries are required.
Dependencies
This packages depends on @nyaf/lib only.
The Flux Store
This module is the store implementation, a simple flux variant without the burden of Redux. It strictly follows the flux pattern and brings, once fully understood, a great amount of strict programming style to your application. It brings state to your single page app (SPA). Outside of a SPA it’s not useful.
How it works
It’s very much like Redux, but makes use of decorators to write less code. It’s a good strategy to create one global store in your app. Leave it empty if there are no global actions, but make it global if you have such actions.
Then, define three parts for each implementation:
- Actions that the component offers (such as SEARCH, LOAD, SET, REMOVE, you name it)
- Reducers that are pure function calls that do what your business logic requires (change data, call services)
- A State Object that holds all the data. The reducer can change the state, but nobody else can
In the component you have two tasks:
- Dispatch actions and add payload if required.
- Listen for changes in the store to know when a reducer finished its task.
An async load must not be split-up. The calls are async, hence the state change may appear later, but nonetheless it lands in the component eventually.
Actions
Define the capabilities of your app, along with some default or initial value. In this example I use `Symbol` to define unique constants that are being used for any further request of an action.
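A sketch of such an action definition; the shape of the exported map, pairing each action with an initial value, is an assumption:

```ts
export const INC = Symbol('INC');
export const DEC = Symbol('DEC');
export const SET = Symbol('SET');

export default {
  [INC]: () => 1,                      // initial/default payloads
  [DEC]: () => -1,
  [SET]: (value: number) => value
};
```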
The following figure shows the relevant parts of the action definition:
Why use actions? It’s convenient to have typed constants in the editor and to use easy to remember names without the chance of mistakenly creating typos.
Reducer
Define what happens if an action is being dispatched:
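A sketch of a matching reducer map; the store type and the partial return value follow the description below, while the exact registration shape is an assumption:

```ts
export default {
  [INC]: (state: { counter: number }, payload: number) => {
    return { counter: state.counter + payload };   // only the changed fragment is returned
  },
  [SET]: async (state: { counter: number }, payload: number) => {
    return { counter: payload };                   // reducers may be async
  }
};
```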
The following figure shows the relevant parts of the reducer definition:
The returned payload is the whole store object by reference. The type for the store is optional and helps elevating the power of TypeScript and getting a type safe store.
Why use reducers at all? Pure function calls are the foundation of a side effect free business layer. You have exactly one location where the logic goes - the reducer. That said, from now on you will know where to have logic, where to have UI, and where to store everything.
Reducers can be sync or async, every function can be made as you like.
Return Value Considerations
The return value is an object that contains the fragments of the store that need to be changed. Through subscriptions this is the way to inform other instances that something happened. But be careful with setting multiple values in one single step. The store logic will execute property by property and immediately publish the change event. A subscriber will receive the changes in the exact order of the properties in the reducer’s return value. When the subscriber receives the first property’s change event, the new value is provided. However, the remaining values are not yet set, and hence the store is in an intermediate state. You must wait for all subscribers to get their final values. The best way to avoid hassle here is to avoid returning multiple values from a single reducer function.
Store and Dispatcher
The store holds the state, provides a dispatch function, and fires events in case a store value changes. First, the store can be defined by types, but this is an option and you may decide to go with a simple object just for the sake of simplicity.
The example shows a store that consists of fragments. This allows one to use parts of the store just by using the type fragments.
Now see the usage within a component. First, you must configure the store with the elements written before. As shown it’s easy to combine reducers and add the various actions. To have the state typed a generic is being used.
Use the Store
Now make the store constant available in the component, if it’s not yet defined there. This store can handle just one single component or spread multiple components and form eventually a single source of truth for the whole application.
Pro Tip! Combine this example with the forms module (@nyaf/forms) and get binding on element level using the `n-bind` template feature.
Type Handling in Typescript
The store has these basic parts as described before:
- Actions
- Reducer
- Store and Store Types
The Actions are basically string constants. The reducers get a payload that can be anything. The return value is the Store Type.
The store has two basic functions:

- dispatch
- subscribe

You dispatch an Action along with a payload. So, the types are `string` and `any`.
When you receive a store event from a subscription, that subscription watches for changes of a part of the Store Type. The event handler then receives the whole store.
Example
Assume we deal with a CRUD (Create, Read, Update, Delete) component using a custom model like this:
The decorators are from the @nyaf/forms project.
Now, some actions are required:
Also, some reducers doing the hard work:
DatabaseService.instance is a service class following the singleton pattern. It executes SQL. $sql provides the statements from a resource file.
The store summarizes all this for easy processing:
Now, the component can dispatch actions with payloads and receive store changes.
The reducer receives the ALL action. It pulls all the data and sets the gridResult object. The subscriber listens for this and can handle the data (re-render, for example).
The essential part here is that the return value of the subscriber is always the Store Type (here archiveStoreType). So you don’t need to think about the current type and TypeScript resolves the types within properly. However, the subscriber is for just one property of the store and only changes of this property will trigger the handler. To get the data, access it like this:
The underlying object is a `Proxy`, not your actual type.
Global and Local Store
Technically there is just one store. But logically you will usually split the access into a global store (per app or module) and a local one - per component.
Merge Strategy
Within a component the stores are being merged and appear as one unit afterwards.
Disposing
Some event handlers and especially the store environment need a proper removal of handlers. This happens in the `dispose` method you can override in the component.
Example
This is how it looks like:
Even easier is the usage of the `Dispose` decorator like this:
You can now remove the `dispose` method entirely.
General Usage
The `@Dispose` decorator is defined in the base library and not limited to store actions.
Effects Mapping
A component is basically just a user interface (UI) that is defined by HTML. This UI can be dynamic in both directions, receiving user actions and reacting to changes in an underlying data model. In business components this leads to a significant amount of code that is primarily just a reference to the coding environment. For user actions it’s a number of event hooks leading to handlers. For data changes it’s the binding to a model and code to monitor changes.
The Flux store reduces the amount of code by moving the actual business logic to the reducer functions. That’s big progress compared to traditional programming styles, but the remaining definitions are now only skeletons for function calls. It would be great to have these function calls reduced to the bare minimum of code and, in the same step, collected in one single definition just like the reducers. This feature exists in @nyaf and it’s called Effects.
The Effects Decorator
The decorator exists once on a component. The API looks like this:
It only makes sense in conjunction with the store itself. This is how it goes with a real component:
Imagine this code in a component without effects:
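A sketch of what such boilerplate typically looks like (plain DOM code; the store access through this.store is an assumption):

```ts
// inside the component, after rendering
this.querySelector('[data-action="inc"]')
    ?.addEventListener('click', () => this.store.dispatch(INC, 1));
```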
The whole purpose of the code is to add a click event and trigger the dispatcher. Effects move both parts outside of the component and you can remove the code entirely. The view becomes simpler and the component is smaller and less error-prone.
Using the Effects Decorator
To keep the handler stuff outside the component and still connected to one we use a decorator. The following example gives you an impression how this could look like:
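A sketch following the Effect interface described below; the action constant and selector are illustrative:

```ts
@Effects([
  {
    selector: '[data-action="inc"]',      // any querySelectorAll selector
    trigger: 'click',                     // DOM or custom event name
    action: INC,                          // action known by the store
    parameter: (e) => Number((e.target as HTMLElement).dataset.value ?? 1)
  }
])
export class CounterComponent extends BaseComponent<{}> { /* ... */ }
```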
The decorator accepts an array of objects of type `Effect`. This type is an interface that has the following API:

- `selector`: A string that can be handled by `querySelectorAll`.
- `trigger`: A string that is one of the common ECMAScript events an element can fire, or any custom event name.
- `action`: A string constant that the store’s reducer accepts in a dispatch call.
- `parameter`: An (optional) function that retrieves a value from the event handler parameter.
The Selector
The selector is a string that can be handled by `querySelectorAll`. This is mandatory and the selector must return at least one element. The selector is executed after the life cycle state `Load` is reached, that means, after the `render` function has executed. The selector will not get any elements you add later dynamically.
To avoid any conflicts it’s strongly recommended to use data- attributes and avoid any CSS stuff to select elements, especially not the `class` attribute.
The Trigger
The trigger is a string that is one of the common ECMAScript events an element can fire, or any custom event name. To support IntelliSense a number of common events is part of the definition, but technically it’s allowed to use any string here. Internally the event is attached to the outcome of the selector by using `addEventListener(trigger)`.
The Action Definition
The action is a string constant that the store’s reducer accepts in a dispatch call. It’s recommended to use the action constants and not provide any string values here directly. The binder class that handles this internally will throw an exception in case the action is not known by the store.
The Parameter
The Parameter is a function that retrieves a value from the event handler parameter. This is the only optional value. You can omit it in case the dispatched action does not need a payload. For all other reducer calls this function returns a value that’s being used as the payload.
The returned type is always `any` without further restrictions.
The input parameter is the event’s parameter object. In most cases it’s of type `Event`, `KeyboardEvent`, or `MouseEvent`. In case of a custom component it could be `CustomEvent`. To retrieve values the best way is to access the source element using this code snippet:
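A sketch of such a snippet:

```ts
const el = e.target as HTMLElement;
const value = el.dataset.value;     // read values from data- attributes
```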
To provide dynamic values, the data- attributes are a robust way of doing so. To access the values directly use the `dataset` property. In case your attribute is further divided into sections using the kebab style (such as in data-action-value), the `dataset` property converts this into camel case (action-value transforms into actionValue). But you can use any other property of the event source to set values. You can also simply use static values. However, even if technically possible, it’s not recommended to add any business logic or validation code in these functions. Move such code consequently to the reducer. Otherwise your logic code will be split up and becomes very hard to maintain. The reason the `@Effects` decorator exists is just to get a more rigid structure in your app.
Automatic Updates
The `@Updates` decorator complements the `@Effects` decorator. The schema is similar. However, both decorators work independently of each other and you can use either or both.
The Updates Decorator
The decorator exists once on a component. The API looks like this:
It only makes sense in conjunction with the store itself (a missing store will throw an exception). This is how it goes with a real component:
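A sketch following the Update interface described below; the store type name is illustrative:

```ts
@Updates<CounterStore>([
  {
    selector: '[data-store-counter]',   // where to write
    store: 'counter',                   // which store property to watch
    target: 'textContent'               // which element property to set
  }
])
export class CounterComponent extends BaseComponent<{}> { /* ... */ }
```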
The piece of HTML that this `@Updates` decorator setting addresses is shown below:
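For example:

```html
<span data-store-counter></span>
```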
There is no additional code required to update the HTML. Once a change in the store occurred, the value is pulled from the store and written into the selected property. In the example the store’s value ‘counter’ is monitored by a subscription. The elements are selected once in the life cycle state `Load`. Further changes of the component’s DOM are not being processed. The access works with or without shadow DOM. The element’s selector in the example is ‘[data-store-counter]’. You can use any selector `querySelectorAll` would accept. If multiple elements match, the assignment will happen multiple times. The target property is ‘textContent’. You can use any property that the selected element or component supports. Be aware that the access is property access on code level. That means a virtual attribute of a component will not work, because it’s not a change in markup. If you have a component as the target and wish to write a value to an observed attribute, you must introduce getter and setter methods to support the `@Updates` decorator.
In the example the store definition (using `@ProvideStore`) and the update configuration (using `@Updates`) use the very same store type. That’s not necessary. If the store type is a combined type (as in the example code you can find on Github), consider using one of its partial types for the update to shrink the selection to the part you really need. This avoids errors and improves the readability of your code.
Using the Updates Decorator
The following code shows the typical store subscription, usually assigned in the constructor:
The value is written in a freshly selected element. The very same result can be achieved with the following code:
While this does not seem to be a big advantage (in fact, it’s 2 lines more), the real reason is to avoid any code in the component directly, making it pure view. In the long term this creates clean code and helps to build a more systematic structure.
The decorator accepts an array of objects of type `Update`. This type is an interface that has the following API:

- `selector`: A string that can be handled by `querySelectorAll`.
- `store`: A string that is one of the properties of the store type. This is managed by the generic.
- `target`: A string that’s the name of a property the selected element or component supports. This is not being checked by TypeScript.
The Selector
The selector is a string that can be handled by `querySelectorAll`. This is mandatory and the selector must return at least one element. The selector is executed after the life cycle state `Load` is reached, that means, after the `render` function has executed. The selector will not get any elements you add later dynamically.
To avoid any conflicts it’s strongly recommended to use data- attributes and avoid any CSS stuff to select elements, especially not the `class` attribute.
The Store
The store usually has a type definition to define the fundamental structure. Usually it’s just an interface. The generic provides this type definition and you can choose any of these properties. Internally it’s a `keyof T` definition.
The Target
Because you can’t write a value straight into an element or into a component, you must define a specific property. The type is either any of the properties supported by `HTMLElement` or just a string. This is weak from the standpoint of IntelliSense (in fact, there is actually no check at all), but flexible enough to support all common scenarios.
It’s a trade-off between convenience and security. A more rigid approach would require a generic on the level of the `Update` interface. But with an anonymous type definition you can’t provide a generic. That means you would need to add additional type information. In the end it’s a lot more boilerplate code for a little safety. That’s the reason why the definition can be made so simple.
Installation
Install the store package like this:
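```bash
npm install @nyaf/store
```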
The type definitions required for TypeScript are part of the packages and no additional type libraries are required.
Dependencies
The one and only dependency is the core library, @nyaf/lib.
Notes
Introduction
1. A single logical unit shall never have more than 100 lines of code, comments and empty lines included.
2. The official name of JavaScript is ECMAScript.
3. This is the default name. You can use any name if you like.
Making Components
1. See Wikipedia for details about the origin of this word. In short, it’s meat on a skewer: take the letters as the meat, the dashes as the skewer.
2. This requires ECMAScript 2015 support, which is browser native. Be aware of this in case you have an environment that still downgrades to ES5 for some historical reason.