diff --git a/7.x/search/search_index.json b/7.x/search/search_index.json index 99c2c68a8..1f3509d54 100644 --- a/7.x/search/search_index.json +++ b/7.x/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome","text":"
Welcome to the Community Solid Server! Here we will cover many aspects of the server, such as how to propose changes, what the architecture looks like, and how to use many of the features the server provides.
The documentation here is still incomplete both in content and structure, so feel free to open a discussion about things you want to see added. While we try to update this documentation together with updates in the code, it is always possible we miss something, so please report it if you find incorrect information or links that no longer work.
An introductory tutorial that gives a quick overview of the Solid and CSS basics can be found here. This is a good way to get started with the server and its setup.
If you want to know what is new in the latest version, you can check out the release notes for a high-level overview and information on how to migrate your configuration to the next version. A list that includes all minor changes can be found in the changelog.
"},{"location":"#using-the-server","title":"Using the server","text":"
Below is a non-exhaustive listing of the features available to a server instance, depending on the chosen configuration. The core feature of the CSS is that it uses dependency injection to configure its components, so any of the features below can always be adapted or replaced with custom components if required. It can also be used to configure dummy components for debugging, development, or experimentation purposes. See this tutorial and/or this example repository for more information on that.
To generate configurations with some of these features enabled/disabled, you can use the configuration generator.
"},{"location":"features/#authentication","title":"Authentication","text":"Clients are identified based on the contents of DPoP tokens, as described in the Solid-OIDC specification.
The server also provides several dummy components that can be used here, to either always identify the client as a fixed WebID, or to allow the WebID to be set directly in the Authorization header. These can be configured by changing the ldp/authentication import in your configuration.
Two authorization mechanisms are implemented for determining who has access to resources: Web Access Control (WAC) and Access Control Policy (ACP).
Alternatively, the server can be configured to not have any kind of authorization and allow full access to all resources.
"},{"location":"features/#solid-protocol","title":"Solid Protocol","text":"The Solid Protocol is supported.
Requests to the server support content negotiation for common RDF formats.
Binary range headers are supported.
ETag and Last-Modified headers are supported. These can be used for conditional requests.
PATCH requests targeting RDF resources can be made with N3 Patch or SPARQL Update bodies.
The server can be configured to store data in memory, on the file system, or through a SPARQL endpoint. Similarly, the locking system that is used to prevent data conflicts can be configured to store locks in memory, on the file system, or in a Redis store, or it can be disabled.
Multiple worker threads can be used when starting the server.
"},{"location":"features/#account-management","title":"Account management","text":"Accounts can be created on the server with which users can perform the following actions, through either a JSON or an HTML API:
Using these accounts, the server can generate tokens to support Solid-OIDC authentication.
"},{"location":"features/#pods","title":"Pods","text":"The server keeps track of the pod owners: a list of WebIDs that have full control access over all resources contained within the pod. Owners can be added to and removed from a pod.
Pod URLs can be minted as either subdomain, http://pod.example.com/, or suffix, http://example.com/pod/.
When starting the server, a configuration file can be provided to immediately create one or more accounts on the server with their own pods. See the documentation for more information.
"},{"location":"features/#notifications","title":"Notifications","text":"CSS supports v0.2.0 of the Solid Notifications Protocol. Specifically, it supports the Notification Types WebSocketChannel2023 and WebhookChannel2023.
More documentation on notifications can be found here.
"},{"location":"architecture/core/","title":"Core building blocks","text":"There are several core building blocks used in many places of the server. These are described here.
"},{"location":"architecture/core/#handlers","title":"Handlers","text":"A very important building block that gets reused in many places is the AsyncHandler. The idea is that a handler has two important functions. canHandle determines whether this class is capable of correctly handling the request, and throws an error if it cannot. For example, a class that converts JSON-LD to Turtle can handle all requests containing JSON-LD data, but does not know what to do with a request that contains a JPEG. The second function is handle, where the class executes on the input data and returns the result. If an error gets thrown here, it means there is an issue with the input, for example, if the input data claims to be JSON-LD but actually is not.
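The pattern above can be sketched as follows. This is a minimal illustration, not the actual CSS classes; the JsonLdToTurtleConverter and its placeholder conversion are hypothetical:

```typescript
// Minimal sketch of the AsyncHandler pattern:
// canHandle rejects when the input is unsupported; handle does the actual work.
abstract class AsyncHandler<TIn, TOut> {
  public abstract canHandle(input: TIn): Promise<void>;
  public abstract handle(input: TIn): Promise<TOut>;

  // Convenience: verify support first, then execute.
  public async handleSafe(input: TIn): Promise<TOut> {
    await this.canHandle(input);
    return this.handle(input);
  }
}

interface TypedData { contentType: string; data: string }

// Hypothetical converter that only accepts JSON-LD input.
class JsonLdToTurtleConverter extends AsyncHandler<TypedData, string> {
  public async canHandle(input: TypedData): Promise<void> {
    if (input.contentType !== 'application/ld+json') {
      throw new Error(`Cannot convert ${input.contentType}`);
    }
  }

  public async handle(input: TypedData): Promise<string> {
    // Real conversion omitted; return a placeholder Turtle document.
    return `# converted from JSON-LD\n${input.data}`;
  }
}
```

Calling `handleSafe` on this converter with a JPEG body rejects in the canHandle step, while a JSON-LD body flows through to handle.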
The power of using this interface really shines when using certain utility classes. The one we use the most is the WaterfallHandler, which takes as input a list of handlers of the same type. The input and output types of a WaterfallHandler are the same as those of its input handlers, meaning it can be used in the same places. When doing a canHandle call, it will iterate over all its input handlers to find the first one where the canHandle call succeeds, and when calling handle it will return the result of that specific handler. This allows us to chain together many handlers that each have their specific niche, such as handlers that each support a specific HTTP method (GET/PUT/POST/etc.), or handlers that only take requests targeting a specific subset of URLs. To the parent class it will look like it has a handler that supports all methods, while in practice it will be a WaterfallHandler containing all these separate handlers.
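A sketch of the waterfall idea, simplified from the CSS implementation (the per-method handlers are hypothetical):

```typescript
// Try each handler's canHandle in order and delegate to the first one that accepts.
interface Handler<TIn, TOut> {
  canHandle(input: TIn): Promise<void>;
  handle(input: TIn): Promise<TOut>;
}

class WaterfallHandler<TIn, TOut> implements Handler<TIn, TOut> {
  public constructor(private readonly handlers: Handler<TIn, TOut>[]) {}

  private async findHandler(input: TIn): Promise<Handler<TIn, TOut>> {
    for (const handler of this.handlers) {
      try {
        await handler.canHandle(input);
        return handler;
      } catch {
        // This handler does not support the input; try the next one.
      }
    }
    throw new Error('No handler supports this input');
  }

  public async canHandle(input: TIn): Promise<void> {
    await this.findHandler(input);
  }

  public async handle(input: TIn): Promise<TOut> {
    const handler = await this.findHandler(input);
    return handler.handle(input);
  }
}

// Hypothetical per-method handlers, combined into one that supports both methods.
const methodHandler = (method: string): Handler<{ method: string }, string> => ({
  async canHandle(input) { if (input.method !== method) throw new Error('unsupported'); },
  async handle() { return `handled ${method}`; },
});
const union = new WaterfallHandler([methodHandler('GET'), methodHandler('PUT')]);
```

To the caller, `union` looks like a single handler that supports both GET and PUT.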
Some other utility classes are the ParallelHandler that runs all handlers simultaneously, and the SequenceHandler that runs all of them one after the other. Since multiple handlers are executed here, these only work for handlers that have no output.
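The difference between the two can be sketched like this (simplified; the real CSS classes are generic AsyncHandlers):

```typescript
// Output-less handlers modeled as async functions for brevity.
type VoidHandler<T> = (input: T) => Promise<void>;

class SequenceHandler<T> {
  public constructor(private readonly handlers: VoidHandler<T>[]) {}
  public async handle(input: T): Promise<void> {
    for (const handler of this.handlers) {
      await handler(input); // One after the other.
    }
  }
}

class ParallelHandler<T> {
  public constructor(private readonly handlers: VoidHandler<T>[]) {}
  public async handle(input: T): Promise<void> {
    // All handlers run simultaneously.
    await Promise.all(this.handlers.map((handler) => handler(input)));
  }
}

// Demo: record execution order.
const order: string[] = [];
const record = (name: string): VoidHandler<string> => async () => { order.push(name); };
const sequence = new SequenceHandler([record('first'), record('second')]);
```

Because the results are either awaited in order or merged by Promise.all, neither class can return a meaningful single output, which is why they are only used for handlers without output.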
Almost all data is handled in a streaming fashion. This allows us to work with very large resources without having to fully load them in memory: a client could be reading data that is being returned by the server while the server is still reading the file. Internally, this means we mostly handle data as Readable objects. We actually use Guarded<Readable>, an internal format we created to help with error handling. Such streams can be created using utility functions such as guardStream and guardedStreamFrom. Similarly, we have a pipeSafely utility to pipe streams in a way that also helps with errors.
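The guarded-stream utilities are internal to CSS, but the underlying problem they address can be shown with plain Node.js streams: pipe() does not propagate errors from the source to the destination, so something has to forward them. This is a sketch of that idea, not the actual pipeSafely implementation:

```typescript
import { PassThrough, Readable } from 'node:stream';

// Pipe a source into a destination while forwarding source errors,
// so a consumer of the destination also sees failures.
function pipeForwardingErrors(source: Readable, destination: PassThrough): PassThrough {
  source.pipe(destination);
  source.on('error', (error): void => {
    destination.destroy(error);
  });
  return destination;
}

// Demo: stream a resource chunk by chunk instead of loading it fully in memory.
const source = Readable.from(['large ', 'resource ', 'data']);
const piped = pipeForwardingErrors(source, new PassThrough());
```

A consumer can start reading from `piped` while the source is still producing chunks, which is the streaming behaviour described above.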
The community server uses the dependency injection framework Components.js to link all class instances together, and uses Components-Generator.js to automatically generate the necessary description configurations of all classes. This framework allows us to configure our components in a JSON file. The advantage of this is that changing the configuration of components does not require any changes to the code, as one can just change the default configuration file, or provide a custom configuration file.
More information can be found in the Components.js documentation, but a summarized overview can be found below.
"},{"location":"architecture/dependency-injection/#component-files","title":"Component files","text":"Components.js requires a component file for every class you might want to instantiate. Fortunately those get generated automatically by Components-Generator.js. Calling npm run build will call the generator and generate those JSON-LD files in the dist folder. The generator uses index.ts, so new classes always have to be added there or they will not get a component file.
Configuration files are how we tell Components.js which classes to instantiate and link together. All the community server configurations can be found in the config folder. That folder also contains information about how different pre-defined configurations can be used.
A single component in such a configuration file might look as follows:
{\n \"comment\": \"Storage used for account management.\",\n \"@id\": \"urn:solid-server:default:AccountStorage\",\n \"@type\": \"JsonResourceStorage\",\n \"source\": { \"@id\": \"urn:solid-server:default:ResourceStore\" },\n \"baseUrl\": { \"@id\": \"urn:solid-server:default:variable:baseUrl\" },\n \"container\": \"/.internal/accounts/\"\n}\n With the corresponding constructor of the JsonResourceStorage class:
public constructor(source: ResourceStore, baseUrl: string, container: string)\n The important elements here are the following:
\"comment\": (optional) A description of this component.\"@id\": (optional) A unique identifier of this component, which allows it to be used as parameter values in different places.\"@type\": The class name of the component. This must be a TypeScript class name that is exported via index.ts.As you can see from the constructor, the other fields are direct mappings from the constructor parameters. source references another object, which we refer to using its identifier urn:solid-server:default:ResourceStore. baseUrl is just a string, but here we use a variable that was set before calling Components.js which is why it also references an @id. These variables are set when starting up the server, based on the command line parameters.
The initial architecture document the project was started from can be found here. Many things have been added since the original inception of the project, but the core ideas within that document are still valid.
As can be seen from the architecture, an important idea is the modularity of all components. No actual implementations are defined there, only their interfaces. Making all the components independent of each other in such a way provides us with an enormous flexibility: they can all be replaced by a different implementation, without impacting anything else. This is how we can provide many different configurations for the server, and why it is impossible to provide ready solutions for all possible combinations.
"},{"location":"architecture/overview/#architecture-diagrams","title":"Architecture diagrams","text":"Having a modular architecture makes it more difficult to give a complete architecture overview. We will limit ourselves to the more commonly used default configurations we provide, and in certain cases we might give examples of what differences there are based on what configurations are being imported.
To do this we will make use of architecture diagrams. We will use an example below to explain the formatting used throughout the architecture documentation:
flowchart TD\n LdpHandler("<strong>LdpHandler</strong><br>ParsingHttpHandler")\n LdpHandler --> LdpHandlerArgs\n\n subgraph LdpHandlerArgs[" "]\n RequestParser("<strong>RequestParser</strong><br>BasicRequestParser")\n Auth("<br>AuthorizingHttpHandler")\n ErrorHandler("<strong>ErrorHandler</strong><br><i>ErrorHandler</i>")\n ResponseWriter("<strong>ResponseWriter</strong><br>BasicResponseWriter")\n end Below is a summary of how to interpret such diagrams:
Most identifiers are shortened for readability: the full identifier can be obtained by prepending urn:solid-server:default: to the shorthand identifier. For example, in the above, LdpHandler is a shorthand for the actual identifier urn:solid-server:default:LdpHandler and is an instance of ParsingHttpHandler. It has 4 parameters, one of which has no identifier but is an instance of AuthorizingHttpHandler.
Below are the sections that go deeper into the features of the server and how those work.
When starting the server, the application actually uses Components.js twice to instantiate components. The first instantiation is used to parse the command line arguments. These then get converted into Components.js variables and are used to instantiate the actual server.
"},{"location":"architecture/features/cli/#architecture","title":"Architecture","text":"flowchart TD\n CliResolver(\"<strong>CliResolver</strong><br>CliResolver\")\n CliResolver --> CliResolverArgs\n\n subgraph CliResolverArgs[\" \"]\n CliExtractor(\"<strong>CliExtractor</strong><br>YargsCliExtractor\")\n ShorthandResolver(\"<strong>ShorthandResolver</strong><br>CombinedShorthandResolver\")\n end\n\n ShorthandResolver --> ShorthandResolverArgs\n subgraph ShorthandResolverArgs[\" \"]\n BaseUrlExtractor(\"<br>BaseUrlExtractor\")\n KeyExtractor(\"<br>KeyExtractor\")\n AssetPathExtractor(\"<br>AssetPathExtractor\")\n end The CliResolver (urn:solid-server-app-setup:default:CliResolver) is simply a way to combine both the CliExtractor (urn:solid-server-app-setup:default:CliExtractor) and ShorthandResolver (urn:solid-server-app-setup:default:ShorthandResolver) into a single object and has no other function.
Which arguments are supported and which Components.js variables are generated can depend on the configuration that is being used. For example, for an HTTPS server additional arguments will be needed to specify the necessary key/cert files.
"},{"location":"architecture/features/cli/#cliresolver","title":"CliResolver","text":"The CliResolver converts the incoming string of arguments into a key/value object. By default, a YargsCliExtractor is used, which makes use of the yargs library and is configured similarly.
The ShorthandResolver uses the key/value object that was generated above to generate Components.js variable bindings. A CombinedShorthandResolver combines the results of multiple ShorthandExtractor by mapping their values to specific variables. For example, a BaseUrlExtractor will be used to extract the value for baseUrl, or port if no baseUrl value is provided, and use it to generate the value for the variable urn:solid-server:default:variable:baseUrl.
These extractors are also where the default values for the server are defined. For example, BaseUrlExtractor will be instantiated with a default port of 3000 which will be used if no port is provided.
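The behaviour of such an extractor can be sketched as follows. This is a simplified illustration of the BaseUrlExtractor logic described above, not its real implementation:

```typescript
// Shorthand arguments as parsed from the command line (simplified).
interface Shorthand {
  baseUrl?: string;
  port?: number;
}

// Derive the value bound to urn:solid-server:default:variable:baseUrl:
// prefer an explicit baseUrl, otherwise fall back to the port,
// using the default port when neither was provided.
function extractBaseUrl(args: Shorthand, defaultPort = 3000): string {
  if (args.baseUrl) {
    return args.baseUrl;
  }
  const port = args.port ?? defaultPort;
  return `http://localhost:${port}/`;
}
```

So starting the server without any arguments would yield http://localhost:3000/ as base URL under these assumptions.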
The variables generated here will be used to initialize the server.
"},{"location":"architecture/features/http-handler/","title":"Handling HTTP requests","text":"The direction of the arrows was changed slightly here to make the graph readable.
flowchart LR\n HttpHandler(\"<strong>HttpHandler</strong><br>SequenceHandler\")\n HttpHandler --> HttpHandlerArgs\n\n subgraph HttpHandlerArgs[\" \"]\n direction LR\n Middleware(\"<strong>Middleware</strong><br><i>HttpHandler</i>\")\n WaterfallHandler(\"<br>WaterfallHandler\")\n end\n\n Middleware --> WaterfallHandler\n WaterfallHandler --> WaterfallHandlerArgs\n\n subgraph WaterfallHandlerArgs[\" \"]\n direction TB\n StaticAssetHandler(\"<strong>StaticAssetHandler</strong><br>StaticAssetHandler\")\n OidcHandler(\"<strong>OidcHandler</strong><br><i>HttpHandler</i>\")\n NotificationHttpHandler(\"<strong>NotificationHttpHandler</strong><br><i>HttpHandler</i>\")\n StorageDescriptionHandler(\"<strong>StorageDescriptionHandler</strong><br><i>HttpHandler</i>\")\n AuthResourceHttpHandler(\"<strong>AuthResourceHttpHandler</strong><br><i>HttpHandler</i>\")\n IdentityProviderHttpHandler(\"<strong>IdentityProviderHttpHandler</strong><br><i>HttpHandler</i>\")\n LdpHandler(\"<strong>LdpHandler</strong><br><i>HttpHandler</i>\")\n end\n\n StaticAssetHandler --> OidcHandler\n OidcHandler --> NotificationHttpHandler\n NotificationHttpHandler --> StorageDescriptionHandler\n StorageDescriptionHandler --> AuthResourceHttpHandler\n AuthResourceHttpHandler --> IdentityProviderHttpHandler\n IdentityProviderHttpHandler --> LdpHandler The HttpHandler is responsible for handling an incoming HTTP request. The request will always first go through the Middleware, where certain required headers will be added such as CORS headers.
After that it will go through the list in the WaterfallHandler to find the first handler that understands the request, with the LdpHandler at the bottom being the catch-all default.
The urn:solid-server:default:StaticAssetHandler matches exact URLs to static assets which require no further logic. An example of this is the favicon, where the /favicon.ico URL is directed to the favicon file at /templates/images/favicon.ico. It can also map entire folders to a specific path, such as /.well-known/css/styles/ which contains all stylesheets.
The urn:solid-server:default:OidcHandler handles all requests related to the Solid-OIDC specification. The OIDC component is configured to work on the /.oidc/ subpath, so this handler catches all those requests and sends them to the internal OIDC library that is used.
The urn:solid-server:default:NotificationHttpHandler catches all notification subscription requests. By default these are requests targeting /.notifications/. Which specific subscription type is targeted is then based on the next part of the URL.
The urn:solid-server:default:StorageDescriptionHandler returns the relevant RDF data for requests targeting a storage description resource. It does this by knowing which URL suffix is used for such resources, and verifying that the associated container is an actual storage container.
The urn:solid-server:default:AuthResourceHttpHandler is identical to the urn:solid-server:default:LdpHandler which will be discussed below, but only handles resources relevant for authorization.
In practice this means that if your server is configured to use Web Access Control for authorization, this handler will catch all requests targeting .acl resources.
The reason these already need to be handled here is so they can also be used to allow authorization on the following handler(s). More on this can be found in the identity provider documentation.
"},{"location":"architecture/features/http-handler/#identityproviderhttphandler","title":"IdentityProviderHttpHandler","text":"The urn:solid-server:default:IdentityProviderHttpHandler handles everything related to our custom identity provider API, such as registering, logging in, returning the relevant HTML pages, etc. All these requests are identified by being on the /.account/ subpath. More information on the API can be found in the identity provider documentation. The architectural overview can be found here.
Once a request reaches the urn:solid-server:default:LdpHandler, the server assumes this is a standard Solid request according to the Solid Protocol. A detailed description of what happens then can be found here.
When starting the server, multiple Initializers trigger to set up everything correctly, the last of which starts listening on the specified port. Similarly, when stopping the server, several Finalizers trigger to clean up where necessary, although this only happens when the server was started through code.
"},{"location":"architecture/features/initialization/#app","title":"App","text":"flowchart TD\n App(\"<strong>App</strong><br>App\")\n App --> AppArgs\n\n subgraph AppArgs[\" \"]\n Initializer(\"<strong>Initializer</strong><br><i>Initializer</i>\")\n AppFinalizer(\"<strong>Finalizer</strong><br><i>Finalizer</i>\")\n end App (urn:solid-server:default:App) is the main component that gets instantiated by Components.js. Every other component should be able to trace an instantiation path back to it if it also wants to be instantiated.
Its only function is to contain an Initializer and a Finalizer, which get called by calling start and stop respectively.
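The role of the App can be sketched like this (a simplification; the real App class wires in many more concerns):

```typescript
// An initialization or finalization step, modeled as an async function.
type Hook = () => Promise<void>;

// The App only holds an initializer and a finalizer and exposes start/stop.
class App {
  public constructor(
    private readonly initializer: Hook,
    private readonly finalizer: Hook,
  ) {}

  public async start(): Promise<void> {
    await this.initializer();
  }

  public async stop(): Promise<void> {
    await this.finalizer();
  }
}

// Demo: record which hooks ran.
const events: string[] = [];
const app = new App(
  async () => { events.push('init'); },
  async () => { events.push('final'); },
);
```

Starting and then stopping this app runs the initializer first and the finalizer afterwards, mirroring the lifecycle described above.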
flowchart TD\n Initializer("<strong>Initializer</strong><br>SequenceHandler")\n Initializer --> InitializerArgs\n\n subgraph InitializerArgs[" "]\n direction LR\n LoggerInitializer("<strong>LoggerInitializer</strong><br>LoggerInitializer")\n PrimaryInitializer("<strong>PrimaryInitializer</strong><br>ProcessHandler")\n WorkerInitializer("<strong>WorkerInitializer</strong><br>ProcessHandler")\n end\n\n LoggerInitializer --> PrimaryInitializer\n PrimaryInitializer --> WorkerInitializer The very first thing that needs to happen is initializing the logger; before this happens, other classes are unable to use logging.
The PrimaryInitializer will only trigger once, in the primary worker thread, while the WorkerInitializer will trigger for every worker thread. If your server setup is single-threaded, which is the default, there is no relevant difference between the two.
flowchart TD\n PrimaryInitializer(\"<strong>PrimaryInitializer</strong><br>ProcessHandler\")\n PrimaryInitializer --> PrimarySequenceInitializer(\"<strong>PrimarySequenceInitializer</strong><br>SequenceHandler\")\n PrimarySequenceInitializer --> PrimarySequenceInitializerArgs\n\n subgraph PrimarySequenceInitializerArgs[\" \"]\n direction LR\n CleanupInitializer(\"<strong>CleanupInitializer</strong><br>SequenceHandler\")\n PrimaryParallelInitializer(\"<strong>PrimaryParallelInitializer</strong><br>ParallelHandler\")\n WorkerManager(\"<strong>WorkerManager</strong><br>WorkerManager\")\n end\n\n CleanupInitializer --> PrimaryParallelInitializer\n PrimaryParallelInitializer --> WorkerManager The above is a simplification of all the initializers that are present in the PrimaryInitializer as there are several smaller initializers that also trigger but are less relevant here.
The CleanupInitializer is an initializer that cleans up anything that might have remained from a previous server start and could impact behaviour. Relevant components in other parts of the configuration are responsible for adding themselves to this array if needed. An example of this is file-based locking components which might need to remove any dangling locking files.
The PrimaryParallelInitializer can be used to add any initializers that have to happen in the primary process. This makes it easier for users to add initializers, as they can simply be appended to its handlers.
The WorkerManager is responsible for setting up the worker threads, if any.
flowchart TD\n WorkerInitializer(\"<strong>WorkerInitializer</strong><br>ProcessHandler\")\n WorkerInitializer --> WorkerSequenceInitializer(\"<strong>WorkerSequenceInitializer</strong><br>SequenceHandler\")\n WorkerSequenceInitializer --> WorkerSequenceInitializerArgs\n\n subgraph WorkerSequenceInitializerArgs[\" \"]\n direction LR\n WorkerParallelInitializer(\"<strong>WorkerParallelInitializer</strong><br>ParallelHandler\")\n ServerInitializer(\"<strong>ServerInitializer</strong><br>ServerInitializer\")\n end\n\n WorkerParallelInitializer --> ServerInitializer The WorkerInitializer is quite similar to the PrimaryInitializer but triggers once per worker thread. Like the PrimaryParallelInitializer, the WorkerParallelInitializer can be used to add any custom initializers that need to run.
The ServerInitializer is the initializer that finally starts up the server by listening on the relevant port, once all the initialization described above is finished. It takes two components as input: an HttpServerFactory and a ServerListener.
flowchart TD\n ServerInitializer(\"<strong>ServerInitializer</strong><br>ServerInitializer\")\n ServerInitializer --> ServerInitializerArgs\n\n subgraph ServerInitializerArgs[\" \"]\n direction LR\n ServerFactory(\"<strong>ServerFactory</strong><br>BaseServerFactory\")\n ServerListener(\"<strong>ServerListener</strong><br>ParallelHandler\")\n end\n\n ServerListener --> HandlerServerListener(\"<strong>HandlerServerListener</strong><br>HandlerServerListener\")\n\n HandlerServerListener --> HttpHandler(\"<strong>HttpHandler</strong><br><i>HttpHandler</i>\") The HttpServerFactory is responsible for starting a server on a given port. Depending on the configuration this can be an HTTP or an HTTPS server. The created server emits events when it receives requests.
A ServerListener is a class that takes the created server as input and attaches a listener to interpret events. One listener that is always used is the urn:solid-server:default:HandlerServerListener, which calls an HttpHandler to resolve HTTP requests.
Sometimes it is necessary to add additional listeners; these can then be added to the urn:solid-server:default:ServerListener as it is a ParallelHandler. An example of this is when WebSockets are used to handle notifications.
This section covers the architecture used to support the Notifications protocol as described in https://solidproject.org/TR/2022/notifications-protocol-20221231.
There are three core architectural components that have distinct entry points:
Discovery is done through the storage description resource(s). The server returns the same triples for every such resource as the notification subscription URL is always located in the root of the server.
flowchart LR\n StorageDescriptionHandler(\"<br>StorageDescriptionHandler\")\n StorageDescriptionHandler --> StorageDescriber(\"<strong>StorageDescriber</strong><br>ArrayUnionHandler\")\n StorageDescriber --> NotificationDescriber(\"NotificationDescriber<br>NotificationDescriber\")\n NotificationDescriber --> NotificationDescriberArgs\n\n subgraph NotificationDescriberArgs[\" \"]\n direction LR\n NotificationChannelType(\"<br>NotificationChannelType\")\n NotificationChannelType2(\"<br>NotificationChannelType\")\n end The server uses a StorageDescriptionHandler to generate the necessary RDF data and to handle content negotiation. To generate the data we have multiple StorageDescribers, whose results get merged together in an ArrayUnionHandler.
A NotificationChannelType contains the specific details of a notification channel type from the specification, including a JSON-LD representation of the corresponding subscription resource. One specific instance of a StorageDescriber is the NotificationDescriber, which merges those JSON-LD descriptions into a single set of RDF quads. When adding a new subscription type, a new instance of such a class should be added to the urn:solid-server:default:StorageDescriber.
To subscribe, a client has to send a specific JSON-LD request to the URL found during discovery.
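As an illustration, the body of such a request for a WebSocketChannel2023 might look as follows. This is a sketch based on the Solid Notifications Protocol; the topic URL and subscription path are hypothetical examples:

```typescript
// Hypothetical subscription body for a WebSocketChannel2023,
// following the shape defined by the Solid Notifications Protocol.
const subscription = {
  '@context': ['https://www.w3.org/ns/solid/notification/v1'],
  type: 'http://www.w3.org/ns/solid/notifications#WebSocketChannel2023',
  topic: 'http://localhost:3000/foo/', // The resource to watch (example value).
};

// This body would be sent with Content-Type: application/ld+json to the
// subscription URL found during discovery, e.g. /.notifications/WebSocketChannel2023/.
const body = JSON.stringify(subscription);
```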
flowchart LR\n NotificationTypeHandler("<strong>NotificationTypeHandler</strong><br>WaterfallHandler")\n NotificationTypeHandler --> NotificationTypeHandlerArgs\n\n subgraph NotificationTypeHandlerArgs[" "]\n direction LR\n OperationRouterHandler("<br>OperationRouterHandler") --> NotificationSubscriber("<br>NotificationSubscriber")\n NotificationChannelType --> NotificationChannelType("<br><i>NotificationChannelType</i>")\n OperationRouterHandler2("<br>OperationRouterHandler") --> NotificationSubscriber2("<br>NotificationSubscriber")\n NotificationChannelType2 --> NotificationChannelType2("<br><i>NotificationChannelType</i>")\n end Every subscription type should have a subscription URL relative to the root notification URL, which in our configs is set to /.notifications/. For every type there is then an OperationRouterHandler that accepts requests to that specific URL, after which a NotificationSubscriber handles all checks related to subscribing, for which it uses a NotificationChannelType. If the subscription is valid and has authorization, the results will be saved in a NotificationChannelStorage.
flowchart TB\n ListeningActivityHandler(\"<strong>ListeningActivityHandler</strong><br>ListeningActivityHandler\")\n ListeningActivityHandler --> ListeningActivityHandlerArgs\n\n subgraph ListeningActivityHandlerArgs[\" \"]\n NotificationChannelStorage(\"<strong>NotificationChannelStorage</strong><br><i>NotificationChannelStorage</i>\")\n ResourceStore(\"<strong>ResourceStore</strong><br><i>ActivityEmitter</i>\")\n NotificationHandler(\"<strong>NotificationHandler</strong><br>WaterfallHandler\")\n end\n\n NotificationHandler --> NotificationHandlerArgs\n subgraph NotificationHandlerArgs[\" \"]\n direction TB\n NotificationHandler1(\"<br><i>NotificationHandler</i>\")\n NotificationHandler2(\"<br><i>NotificationHandler</i>\")\n end An ActivityEmitter is a class that emits events every time data changes in the server. The MonitoringStore is an implementation of this in the server. The ListeningActivityHandler is the class that listens to these events and makes sure relevant notifications get sent out.
It will pull the relevant subscriptions from the storage and call the stored NotificationHandler for each of them. For every subscription type, a NotificationHandler should be added to the WaterfallHandler that handles notifications for that specific type.
To add support for WebSocketChannel2023 notifications, components were added as described in the documentation above.
For discovery, a NotificationDescriber was added with the corresponding settings.
As NotificationChannelType, there is a specific WebSocketChannel2023Type that contains all the necessary information.
As NotificationHandler, the following architecture is used:
flowchart TB\n TypedNotificationHandler(\"<br>TypedNotificationHandler\")\n TypedNotificationHandler --> ComposedNotificationHandler(\"<br>ComposedNotificationHandler\")\n ComposedNotificationHandler --> ComposedNotificationHandlerArgs\n\n subgraph ComposedNotificationHandlerArgs[\" \"]\n direction LR\n BaseNotificationGenerator(\"<strong>BaseNotificationGenerator</strong><br><i>NotificationGenerator</i>\")\n BaseNotificationSerializer(\"<strong>BaseNotificationSerializer</strong><br><i>NotificationSerializer</i>\")\n WebSocket2023Emitter(\"<strong>WebSocket2023Emitter</strong><br>WebSocket2023Emitter\")\n BaseNotificationGenerator --> BaseNotificationSerializer --> WebSocket2023Emitter\n end A TypedNotificationHandler is a handler that can be used to filter out subscriptions for a specific type, making sure only WebSocketChannel2023 subscriptions will be handled.
A ComposedNotificationHandler combines 3 interfaces to handle the notifications:
- NotificationGenerator converts the information into a Notification object.
- NotificationSerializer converts a Notification object into a serialized Representation.
- NotificationEmitter takes a Representation and sends it out in a way specific to that subscription type.

urn:solid-server:default:BaseNotificationGenerator is a generator that fills in the default Notification template, and also caches the result so it can be reused by multiple subscriptions.
urn:solid-server:default:BaseNotificationSerializer converts the Notification to a JSON-LD representation and handles any necessary content negotiation based on the accept notification feature.
A WebSocket2023Emitter is a specific emitter that checks whether the current open WebSockets correspond to the subscription.
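The generate/serialize/emit pipeline that a ComposedNotificationHandler wires together can be sketched like this (simplified signatures, not the real CSS interfaces):

```typescript
// Simplified Notification object.
interface Notification { type: string; object: string; published: string }

type Generator = (topic: string, activity: string) => Notification;
type Serializer = (notification: Notification) => string;
type Emitter = (representation: string) => void;

// Compose the three stages: generate a notification, serialize it,
// then emit the serialized representation.
function composeNotificationHandler(generate: Generator, serialize: Serializer, emit: Emitter) {
  return (topic: string, activity: string): void => {
    emit(serialize(generate(topic, activity)));
  };
}

// Demo with trivial stand-in implementations.
const sent: string[] = [];
const handler = composeNotificationHandler(
  (object, type) => ({ type, object, published: '2023-01-01T00:00:00Z' }),
  (notification) => JSON.stringify(notification),
  (representation) => { sent.push(representation); },
);
handler('http://localhost:3000/foo', 'Update');
```

Swapping in a different Emitter is all it takes to support another delivery mechanism, which is how the WebSocket and webhook variants differ.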
flowchart TB\n WebSocket2023Listener(\"<strong>WebSocket2023Listener</strong><br>WebSocket2023Listener\")\n WebSocket2023Listener --> WebSocket2023ListenerArgs\n\n subgraph WebSocket2023ListenerArgs[\" \"]\n direction LR\n NotificationChannelStorage(\"<strong>NotificationChannelStorage</strong><br>NotificationChannelStorage\")\n SequenceHandler(\"<br>SequenceHandler\")\n end\n\n SequenceHandler --> SequenceHandlerArgs\n\n subgraph SequenceHandlerArgs[\" \"]\n direction TB\n WebSocket2023Storer(\"<strong>WebSocket2023Storer</strong><br>WebSocket2023Storer\")\n WebSocket2023StateHandler(\"<strong>WebSocket2023StateHandler</strong><br>BaseStateHandler\")\n end To detect and store WebSocket connections, the WebSocket2023Listener is added as a listener to the HTTP server. For all WebSocket connections that get opened, it verifies whether they correspond to an existing subscription. If yes, the information gets sent out to its stored WebSocket2023Handler.
In this case, this is a SequenceHandler, which contains a WebSocket2023Storer and a BaseStateHandler. The WebSocket2023Storer will store the WebSocket in the same map used by the WebSocket2023Emitter, so that class can emit events later on, as mentioned above. The state handler will make sure that a notification gets sent out if the subscription has a state feature request, as defined in the notification specification.
The additions required to support WebhookChannel2023 are quite similar to those needed for WebSocketChannel2023:
WebhookDescriber, which is an extension of a NotificationDescriber. The WebhookChannel2023Type class contains all the necessary typing information. WebhookEmitter is the NotificationEmitter that sends the request. WebhookUnsubscriber and WebhookWebId are additional utility classes to support the spec requirements. A large part of every response of the JSON API is the controls block. These are generated by using nested ControlHandler objects. These take as input a key/value map, with the values being either routes or other interaction handlers. These will then be executed to determine the values of the output JSON object, with the same keys. By using other ControlHandlers in the input map, we can create nested objects.
The default structure of these handlers is as follows:
flowchart LR\n RootControlHandler(\"<strong>RootControlHandler</strong><br>ControlHandler\")\n RootControlHandler --controls--> ControlHandler(\"<strong>ControlHandler</strong><br>ControlHandler\")\n ControlHandler --main--> MainControlHandler(\"<strong>MainControlHandler</strong><br>ControlHandler\")\n ControlHandler --account--> AccountControlHandler(\"<strong>AccountControlHandler</strong><br>ControlHandler\")\n ControlHandler --password--> PasswordControlHandler(\"<strong>PasswordControlHandler</strong><br>ControlHandler\")\n ControlHandler --\"oidc\"--> OidcControlHandler(\"<strong>OidcControlHandler</strong><br>OidcControlHandler\")\n ControlHandler --html--> HtmlControlHandler(\"<strong>HtmlControlHandler</strong><br>ControlHandler\")\n\n HtmlControlHandler --main--> MainHtmlControlHandler(\"<strong>MainHtmlControlHandler</strong><br>ControlHandler\")\n HtmlControlHandler --account--> AccountHtmlControlHandler(\"<strong>AccountHtmlControlHandler</strong><br>ControlHandler\")\n HtmlControlHandler --password--> PasswordHtmlControlHandler(\"<strong>PasswordHtmlControlHandler</strong><br>ControlHandler\") Each of these control handlers then has a map of routes which link to the actual API endpoints. How to add these can be seen here.
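As a rough sketch (with illustrative names and types, not the server's actual interfaces), the nesting described above boils down to recursively resolving a map whose values are either routes or further control maps:

```typescript
// A control value is either a resolved route (here simplified to a URL
// string) or a nested map of further controls. Names are illustrative.
type Control = string | ControlMap;
interface ControlMap { [key: string]: Control }

function resolveControls(controls: ControlMap): Record<string, unknown> {
  const result: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(controls)) {
    // Nested ControlHandlers produce nested JSON objects with the same keys.
    result[key] = typeof value === 'string' ? value : resolveControls(value);
  }
  return result;
}
```

Calling this with a nested map then yields the nested `controls` object returned by the API.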
"},{"location":"architecture/features/accounts/overview/","title":"Account management","text":"The main entry point is the IdentityProviderHandler, which routes all requests targeting a resource starting with /.account/ to this component. The request then goes through similar parsing handlers as described here, the flow of which is shown below:
flowchart LR\n Handler(\"<strong>IdentityProviderHandler</strong><br>RouterHandler\")\n ParsingHandler(\"<strong>IdentityProviderParsingHandler</strong><br>AuthorizingHttpHandler\")\n AuthorizingHandler(\"<strong>IdentityProviderAuthorizingHandler</strong><br>AuthorizingHttpHandler\")\n\n Handler --> ParsingHandler\n ParsingHandler --> AuthorizingHandler\n AuthorizingHandler --> HttpHandler(\"<strong>IdentityProviderHttpHandler</strong><br>IdentityProviderHttpHandler\") The IdentityProviderHttpHandler is where the actual differentiation of this component starts. It handles identifying the account based on the supplied cookie and determining the active OIDC interaction, after which it calls an InteractionHandler with this additional input. The InteractionHandler consists of many handlers chained together as follows:
flowchart TD\n HttpHandler(\"<strong>IdentityProviderHttpHandler</strong><br>IdentityProviderHttpHandler\")\n HttpHandler --> InteractionHandler(\"<strong>InteractionHandler</strong><br>WaterfallHandler\")\n InteractionHandler --> InteractionHandlerArgs\n\n subgraph InteractionHandlerArgs[\" \"]\n HtmlViewHandler(\"<strong>HtmlViewHandler</strong><br>HtmlViewHandler\")\n LockingInteractionHandler(\"<strong>LockingInteractionHandler</strong><br>LockingInteractionHandler\")\n end\n\n LockingInteractionHandler --> JsonConversionHandler(\"<strong>JsonConversionHandler</strong><br>JsonConversionHandler\")\n JsonConversionHandler --> VersionHandler(\"<strong>VersionHandler</strong><br>VersionHandler\")\n VersionHandler --> CookieInteractionHandler(\"<strong>CookieInteractionHandler</strong><br>CookieInteractionHandler\")\n CookieInteractionHandler --> RootControlHandler(\"<strong>RootControlHandler</strong><br>ControlHandler\")\n RootControlHandler --> LocationInteractionHandler(\"<strong>LocationInteractionHandler</strong><br>LocationInteractionHandler\")\n LocationInteractionHandler --> InteractionRouteHandler(\"<strong>InteractionRouteHandler</strong><br>WaterfallHandler\") The HtmlViewHandler catches all requests that expect an HTML output. This class keeps a list of HTML pages and their corresponding URLs and returns them when needed.
If the request is for the JSON API, the request goes through a chain of handlers, each responsible for a specific step in the API process. We'll list and summarize these here:
LockingInteractionHandler: In case the request is authenticated, this requests a lock on that account to prevent simultaneous operations on the same account. JsonConversionHandler: Converts the streaming input into a JSON object. VersionHandler: Adds a version number to all output. CookieInteractionHandler: Refreshes the cookie if necessary and adds relevant cookie metadata to the output. RootControlHandler: Responsible for adding all the controls to the output. Will take as input multiple other control handlers which create the nested values in the controls field. LocationInteractionHandler: Catches redirect errors and converts them to JSON objects with a location field. InteractionRouteHandler: A WaterfallHandler containing an entry for every supported API route. All entries contained in the urn:solid-server:default:InteractionRouteHandler have a similar structure: an InteractionRouteHandler, or AuthorizedRouteHandler for authenticated requests, which checks if the request targets a specific URL and redirects the request to its source if there is a match. Its source is quite often a ViewInteractionHandler, which returns a specific view on GET requests and performs an operation on POST requests, but other handlers can also occur.
Below we will give an example of one API route and all the components that are necessary to add it to the server.
"},{"location":"architecture/features/accounts/routes/#route-handler","title":"Route handler","text":"{\n \"@id\": \"urn:solid-server:default:AccountWebIdRouter\",\n \"@type\": \"AuthorizedRouteHandler\",\n \"route\": {\n \"@id\": \"urn:solid-server:default:AccountWebIdRoute\",\n \"@type\": \"RelativePathInteractionRoute\",\n \"base\": { \"@id\": \"urn:solid-server:default:AccountIdRoute\" },\n \"relativePath\": \"webid/\"\n },\n \"source\": { \"@id\": \"urn:solid-server:default:WebIdHandler\" }\n}\n The main entry point is the route handler, which determines the URL necessary to reach this API. In this case we create a new route, relative to the urn:solid-server:default:AccountIdRoute. That route specifically matches URLs of the format http://localhost:3000/.account/account/<accountId>/. Here we create a route relative to that one by appending webid, so the resulting route would match http://localhost:3000/.account/account/<accountId>/webid/. Since an AuthorizedRouteHandler is used here, the request also needs to be authenticated using an account cookie. If there is a match, the request will be sent to the urn:solid-server:default:WebIdHandler.
{\n \"@id\": \"urn:solid-server:default:WebIdHandler\",\n \"@type\": \"ViewInteractionHandler\",\n \"source\": {\n \"@id\": \"urn:solid-server:default:LinkWebIdHandler\",\n \"@type\": \"LinkWebIdHandler\",\n \"baseUrl\": { \"@id\": \"urn:solid-server:default:variable:baseUrl\" },\n \"ownershipValidator\": { \"@id\": \"urn:solid-server:default:OwnershipValidator\" },\n \"accountStore\": { \"@id\": \"urn:solid-server:default:AccountStore\" },\n \"webIdStore\": { \"@id\": \"urn:solid-server:default:WebIdStore\" },\n \"identifierStrategy\": { \"@id\": \"urn:solid-server:default:IdentifierStrategy\" }\n }\n}\n The interaction handler is the class that performs the necessary operation based on the request. Often these are wrapped in a ViewInteractionHandler, which allows classes to have different support for GET and POST requests.
{\n \"@id\": \"urn:solid-server:default:InteractionRouteHandler\",\n \"@type\": \"WaterfallHandler\",\n \"handlers\": [\n { \"@id\": \"urn:solid-server:default:AccountWebIdRouter\" }\n ]\n}\n To make sure the API can be accessed, it needs to be added to the list of urn:solid-server:default:InteractionRouteHandler. This is the main handler that contains entries for all the APIs. This block of Components.js adds the route handler defined above to that list.
{\n \"@id\": \"urn:solid-server:default:AccountControlHandler\",\n \"@type\": \"ControlHandler\",\n \"controls\": [{\n \"ControlHandler:_controls_key\": \"webId\",\n \"ControlHandler:_controls_value\": { \"@id\": \"urn:solid-server:default:AccountWebIdRoute\" }\n }]\n}\n To make sure people can find the API, it is necessary to link it through the associated controls object. This API is related to account management, so we add its route in the account controls with the key webId. More information about controls can be found here.
{\n \"@id\": \"urn:solid-server:default:HtmlViewHandler\",\n \"@type\": \"HtmlViewHandler\",\n \"templates\": [{\n \"@id\": \"urn:solid-server:default:LinkWebIdHtml\",\n \"@type\": \"HtmlViewEntry\",\n \"filePath\": \"@css:templates/identity/account/link-webid.html.ejs\",\n \"route\": { \"@id\": \"urn:solid-server:default:AccountWebIdRoute\" }\n }]\n}\n Some API routes also have an associated HTML page, in which case the page needs to be added to the urn:solid-server:default:HtmlViewHandler, which is what we do here. Usually you will also want to add HTML controls so the page can be found.
{\n \"@id\": \"urn:solid-server:default:AccountHtmlControlHandler\",\n \"@type\": \"ControlHandler\",\n \"controls\": [{\n \"ControlHandler:_controls_key\": \"linkWebId\",\n \"ControlHandler:_controls_value\": { \"@id\": \"urn:solid-server:default:AccountWebIdRoute\" }\n }]\n}\n"},{"location":"architecture/features/protocol/authorization/","title":"Authorization","text":"flowchart TD\n AuthorizingHttpHandler(\"<br>AuthorizingHttpHandler\")\n AuthorizingHttpHandler --> AuthorizingHttpHandlerArgs\n\n subgraph AuthorizingHttpHandlerArgs[\" \"]\n CredentialsExtractor(\"<strong>CredentialsExtractor</strong><br><i>CredentialsExtractor</i>\")\n ModesExtractor(\"<strong>ModesExtractor</strong><br><i>ModesExtractor</i>\")\n PermissionReader(\"<strong>PermissionReader</strong><br><i>PermissionReader</i>\")\n Authorizer(\"<strong>Authorizer</strong><br>PermissionBasedAuthorizer\")\n OperationHttpHandler(\"<br><i>OperationHttpHandler</i>\")\n end Authorization is usually handled by the AuthorizingHttpHandler, which receives a parsed HTTP request in the form of an Operation. It goes through the following steps:
The CredentialsExtractor identifies the credentials of the agent making the call. The ModesExtractor finds which access modes are needed for which resources. The PermissionReader determines the permissions the agent has on the targeted resources. These results are then combined and checked by the Authorizer: if the request is allowed, it is passed on to the OperationHttpHandler; otherwise an error is thrown. There are multiple CredentialsExtractors that each determine identity in a different way. Potentially multiple extractors can apply, making a requesting agent have multiple credentials.
The diagram below shows the default configuration if authentication is enabled.
flowchart TD\n CredentialsExtractor(\"<strong>CredentialsExtractor</strong><br>UnionCredentialsExtractor\")\n CredentialsExtractor --> CredentialsExtractorArgs\n\n subgraph CredentialsExtractorArgs[\" \"]\n WaterfallHandler(\"<br>WaterfallHandler\")\n PublicCredentialsExtractor(\"<br>PublicCredentialsExtractor\")\n end\n\n WaterfallHandler --> WaterfallHandlerArgs\n subgraph WaterfallHandlerArgs[\" \"]\n direction LR\n DPoPWebIdExtractor(\"<br>DPoPWebIdExtractor\") --> BearerWebIdExtractor(\"<br>BearerWebIdExtractor\")\n end Both of the WebID extractors make use of the access-token-verifier library to parse incoming tokens based on the Solid-OIDC specification. All these credentials then get combined into a single union object.
If successful, a CredentialsExtractor will return an object containing all the information extracted, such as the WebID of the agent, or the issuer of the token.
There are also debug configuration options available that can be used to simulate credentials. These can be enabled as different options through the config/ldp/authentication imports.
Access modes are a predefined list of read, write, append, create and delete. The ModesExtractor determines which modes will be necessary and for which resources, based on the request contents.
flowchart TD\n ModesExtractor(\"<strong>ModesExtractor</strong><br>IntermediateCreateExtractor\")\n ModesExtractor --> HttpModesExtractor(\"<strong>HttpModesExtractor</strong><br>WaterfallHandler\")\n\n HttpModesExtractor --> HttpModesExtractorArgs\n\n subgraph HttpModesExtractorArgs[\" \"]\n direction LR\n PatchModesExtractor(\"<strong>PatchModesExtractor</strong><br><i>ModesExtractor</i>\") --> MethodModesExtractor(\"<br>MethodModesExtractor\")\n end The IntermediateCreateExtractor handles requests that would create intermediate containers with a single request. E.g., a PUT request to /foo/bar/baz should create both the /foo/ and /foo/bar/ containers in case they do not exist yet. This extractor makes sure that create permissions are also checked on those containers.
Modes can usually be determined based on just the HTTP method, which is what the MethodModesExtractor does. For example, a GET request will always need the read mode.
The only exception is PATCH requests, where the necessary modes depend on the body and the PATCH type.
flowchart TD\n PatchModesExtractor(\"<strong>PatchModesExtractor</strong><br>WaterfallHandler\") --> PatchModesExtractorArgs\n subgraph PatchModesExtractorArgs[\" \"]\n N3PatchModesExtractor(\"<br>N3PatchModesExtractor\")\n SparqlUpdateModesExtractor(\"<br>SparqlUpdateModesExtractor\")\n end The server supports both N3 Patch and SPARQL Update PATCH requests. In both cases it will parse the bodies to determine what the impact would be of the request and what modes it requires.
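A simplified sketch of the method-to-modes mapping described above. The real extractors also account for things like resource existence (e.g., whether create is needed), so the exact modes below are assumptions for illustration:

```typescript
type AccessMode = 'read' | 'write' | 'append' | 'create' | 'delete';

// Sketch: derive required access modes from the HTTP method alone.
// Simplified assumption; the actual MethodModesExtractor is more involved.
function modesForMethod(method: string): AccessMode[] {
  switch (method) {
    case 'GET':
    case 'HEAD': return [ 'read' ];
    case 'PUT': return [ 'write' ];
    case 'POST': return [ 'append' ];
    case 'DELETE': return [ 'delete' ];
    default:
      // PATCH cannot be decided here: its modes depend on the body,
      // which is why separate N3/SPARQL extractors parse it first.
      throw new Error(`Cannot determine modes for ${method} from the method alone`);
  }
}
```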
"},{"location":"architecture/features/protocol/authorization/#permission-reading","title":"Permission reading","text":"PermissionReaders take the input of the above to determine which permissions are available. The modes from the previous step are not yet needed, but can be used as an optimization, since we only need to know whether we have permission on those modes. Each reader returns all the information it can find based on the resources and modes it receives. In most of the default configurations, the following readers are combined when WebACL is enabled as the authorization method. In case authorization is disabled by changing the authorization import to config/ldp/authorization/allow-all.json, the diagram would be a single class that always returns all permissions.
flowchart TD\n PermissionReader(\"<strong>PermissionReader</strong><br>AuxiliaryReader\")\n PermissionReader --> UnionPermissionReader(\"<br>UnionPermissionReader\")\n UnionPermissionReader --> UnionPermissionReaderArgs\n\n subgraph UnionPermissionReaderArgs[\" \"]\n PathBasedReader(\"<strong>PathBasedReader</strong><br>PathBasedReader\")\n OwnerPermissionReader(\"<strong>OwnerPermissionReader</strong><br>OwnerPermissionReader\")\n WrappedWebAclReader(\"<strong>WrappedWebAclReader</strong><br>ParentContainerReader\")\n end\n\n WrappedWebAclReader --> WebAclAuxiliaryReader(\"<strong>WebAclAuxiliaryReader</strong><br>AuthAuxiliaryReader\")\n WebAclAuxiliaryReader --> WebAclReader(\"<strong>WebAclReader</strong><br>WebAclReader\") The first thing that happens is that if the target is an auxiliary resource that uses the authorization of its subject resource, the AuxiliaryReader inserts that identifier instead. An example of this is if the request targets the metadata of a resource.
The UnionPermissionReader then combines the results of its readers into a single permission object. If one reader rejects a specific mode and another allows it, the rejection takes priority.
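The merge rule — an explicit rejection beats an allowance — could be sketched as follows (illustrative types, not the actual PermissionReader interface):

```typescript
// Each reader returns a partial permission map; `false` is an explicit
// rejection, `true` an allowance, and absence means "no opinion".
type PermissionSet = Partial<Record<string, boolean>>;

function unionPermissions(results: PermissionSet[]): PermissionSet {
  const combined: PermissionSet = {};
  for (const result of results) {
    for (const [mode, allowed] of Object.entries(result)) {
      if (allowed === undefined) continue;
      // A previous `false` can never be overturned by a later `true`.
      combined[mode] = (combined[mode] ?? true) && allowed;
    }
  }
  return combined;
}
```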
The PathBasedReader rejects all permissions for certain paths. This is used to prevent access to the internal data of the server.
The OwnerPermissionReader makes sure owners always have control access to the pods they created on the server. Users will always be able to modify the ACL resources in their pod, even if they accidentally removed their own access.
The final readers are specifically relevant for the WebACL algorithm. The ParentContainerReader checks the permissions on a parent resource if required: creating a resource requires append permissions on the parent container, while deleting a resource requires write permissions there.
In case the target is an ACL resource, control permissions need to be checked, no matter what mode was generated by the ModesExtractor. The AuthAuxiliaryReader makes sure this conversion happens.
Finally, the WebAclReader implements the effective ACL resource algorithm and returns the permissions it finds in that resource. In case no ACL resource is found, this indicates a configuration error and no permissions will be granted.
It is also possible to use ACP as the authorization method instead of WebACL. In that case the diagram is very similar, except that the AuthAuxiliaryReader is configured for Access Control Resources and points to an AcpReader instead.
All the results of the previous steps then get combined in the PermissionBasedAuthorizer to either allow or reject a request. If no permissions are found for a requested mode, or they are explicitly forbidden, a 401 will be returned if the agent was not logged in, or a 403 otherwise.
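Putting the steps of this section together, the final check can be sketched like this (names and signatures are illustrative, not the server's actual interfaces):

```typescript
interface Credentials { webId?: string }
type Permissions = Partial<Record<string, boolean>>;

// Sketch: verify that every required mode is explicitly allowed,
// throwing 401 for anonymous agents and 403 for authenticated ones.
function authorize(required: string[], available: Permissions, credentials: Credentials): void {
  for (const mode of required) {
    if (available[mode] !== true) {
      const status = credentials.webId ? 403 : 401;
      throw new Error(`${status}: missing ${mode} permission`);
    }
  }
}
```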
The LdpHandler, named as a reference to the Linked Data Platform specification, chains several handlers together, each with their own specific purpose, to fully resolve the HTTP request. It specifically handles Solid requests as described in the protocol specification, e.g. a POST request to create a new resource.
Below is a simplified view of how these handlers are linked.
flowchart LR\n LdpHandler(\"<strong>LdpHandler</strong><br>ParsingHttpHandler\")\n LdpHandler --> AuthorizingHttpHandler(\"<br>AuthorizingHttpHandler\")\n AuthorizingHttpHandler --> OperationHandler(\"<strong>OperationHandler</strong><br><i>OperationHandler</i>\")\n OperationHandler --> ResourceStore(\"<strong>ResourceStore</strong><br><i>ResourceStore</i>\") A standard request would go through the following steps:
The ParsingHttpHandler parses the HTTP request into a manageable format, both body and metadata such as headers. The AuthorizingHttpHandler verifies if the request is authorized to access the targeted resource. The OperationHandler determines which action is required based on the HTTP method. The ResourceStore does all the relevant data work. The ParsingHttpHandler eventually receives the response data, or an error, and handles the output. Below are sections that go deeper into the specific steps.
flowchart TD\n ParsingHttpHandler(\"<br>ParsingHttpHandler\")\n ParsingHttpHandler --> ParsingHttpHandlerArgs\n\n subgraph ParsingHttpHandlerArgs[\" \"]\n RequestParser(\"<strong>RequestParser</strong><br>BasicRequestParser\")\n AuthorizingHttpHandler(\"<strong></strong><br>AuthorizingHttpHandler\")\n ErrorHandler(\"<strong>ErrorHandler</strong><br><i>ErrorHandler</i>\")\n ResponseWriter(\"<strong>ResponseWriter</strong><br>BasicResponseWriter\")\n end A ParsingHttpHandler handles both the parsing of the input data and the serializing of the output data. It follows these steps:
It uses a RequestParser to convert the incoming data into an Operation. It sends the Operation to the AuthorizingHttpHandler to receive either a Representation if the operation was a success, or an Error in case something went wrong. In case of an error, the ErrorHandler will convert the Error into a ResponseDescription. It uses a ResponseWriter to output the ResponseDescription as an HTTP response. flowchart TD\n RequestParser(\"<strong>RequestParser</strong><br>BasicRequestParser\") --> RequestParserArgs\n subgraph RequestParserArgs[\" \"]\n TargetExtractor(\"<strong>TargetExtractor</strong><br>OriginalUrlExtractor\")\n PreferenceParser(\"<strong>PreferenceParser</strong><br>AcceptPreferenceParser\")\n MetadataParser(\"<strong>MetadataParser</strong><br><i>MetadataParser</i>\")\n BodyParser(\"<br><i>Bodyparser</i>\")\n Conditions(\"<br>BasicConditionsParser\")\n end\n\n TargetExtractor --> IdentifierStrategy(\"<strong>IdentifierStrategy</strong><br><i>IdentifierStrategy</i>\") The BasicRequestParser is mostly an aggregator of multiple smaller parsers that each handle a very specific part.
This is a single class, the OriginalUrlExtractor, but it fulfills the very important role of making sure input URLs are handled consistently.
The query parameters will always be completely removed from the URL.
There is also an algorithm to make sure all URLs have a \"canonical\" version, since, for example, both & and %26 can be interpreted in the same way. Specifically, all special characters will be encoded into their percent encoding.
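A minimal sketch of such canonicalization: decoding each path segment and re-encoding it gives every special character a single percent-encoded form, so equivalent URLs compare equal. (This is an illustration, not the server's actual algorithm.)

```typescript
// Decode and re-encode every path segment so that '&' and '%26'
// both end up as '%26', giving a single canonical form.
function canonicalizePath(path: string): string {
  return path
    .split('/')
    .map((segment) => encodeURIComponent(decodeURIComponent(segment)))
    .join('/');
}
```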
The IdentifierStrategy it gets as input is used to determine if the resulting URL is within the scope of the server. This can differ depending on whether the server uses subdomains.
The resulting identifier will be stored in the target field of an Operation object.
The AcceptPreferenceParser parses the Accept header and all the relevant Accept-* headers. These will all be put into the preferences field of an Operation object. These will later be used to handle the content negotiation.
For example, when sending an Accept: text/turtle; q=0.9 header, this will result in the preferences object { type: { 'text/turtle': 0.9 } }.
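A simplified sketch of such parsing (real Accept parsing handles more of the header grammar than this):

```typescript
// Parse an Accept header into a media type -> quality map.
function parseAccept(header: string): Record<string, number> {
  const preferences: Record<string, number> = {};
  for (const part of header.split(',')) {
    const [type, ...params] = part.split(';').map((piece) => piece.trim());
    const quality = params.find((param) => param.startsWith('q='));
    // A missing q parameter defaults to a quality of 1.
    preferences[type] = quality ? Number(quality.slice(2)) : 1;
  }
  return preferences;
}
```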
Several other headers can have relevant metadata, such as the Content-Type header, or the Link: <http://www.w3.org/ns/ldp#Container>; rel=\"type\" header which is used to indicate to the server that a request intends to create a container.
Such headers are converted to RDF triples and stored in the RepresentationMetadata object, which will be part of the body field in the Operation.
The default MetadataParser is a ParallelHandler that contains several smaller parsers, each looking at a specific header.
For most requests, the input data stream is used directly in the body field of the Operation, with a few minor checks to make sure the HTTP specification is being followed.
In the case of PATCH requests though, there are several specific body parsers that will convert the request into a JavaScript object containing all the necessary information to execute such a PATCH. Several validation checks will already take place there as well.
"},{"location":"architecture/features/protocol/parsing/#conditions","title":"Conditions","text":"The BasicConditionsParser parses everything related to condition headers, such as if-none-match or if-modified-since, and stores the relevant information in the conditions field of the Operation. These will later be used to determine whether the request should be aborted.
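As an illustration of how such conditions are used later on, here is a sketch of evaluating an if-none-match condition against a resource's current ETag (simplified; the actual implementation covers more headers and edge cases):

```typescript
interface Conditions { ifNoneMatch?: string[] }

// Returns false when the condition fails and the request should be aborted.
function matchesConditions(conditions: Conditions, currentETag: string): boolean {
  if (conditions.ifNoneMatch) {
    // '*' matches any existing resource; otherwise compare ETags.
    if (conditions.ifNoneMatch.includes('*') || conditions.ifNoneMatch.includes(currentETag)) {
      return false;
    }
  }
  return true;
}
```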
In case a request is successful, the AuthorizingHttpHandler will return a ResponseDescription, and if not it will throw an error.
In case an error gets thrown, this will be caught by the ErrorHandler and converted into a ResponseDescription. The request preferences will be used to make sure the serialization is one that is preferred.
Either way we will have a ResponseDescription, which will be sent to the BasicResponseWriter to convert into output headers, data and a status code.
To convert the metadata into headers, it uses a MetadataWriter, which functions as the reverse of the MetadataParser mentioned above: it has multiple writers which each convert certain metadata into a specific header.
As described here, there is a generic solution for modifying resources as a result of PATCH requests. It consists of the following steps: the current representation of the resource is retrieved from the store, the patch is applied to it, and the resulting representation is written back.
The architecture is described more in-depth below.
flowchart LR\n PatchingStore(\"<strong>ResourceStore_Patching</strong><br>ResourceStore\")\n PatchingStore --> PatchHandler(\"<strong>PatchHandler</strong><br>RepresentationPatchHandler\")\n PatchHandler --> Patchers(\"<br>WaterfallHandler\")\n Patchers --> ConvertingPatcher(\"<br>ConvertingPatcher\")\n ConvertingPatcher --> RdfPatcher(\"<strong>RdfPatcher</strong><br>RdfPatcher\") flowchart LR\n RdfPatcher(\"<strong>RdfPatcher</strong><br>RdfPatcher\")\n RdfPatcher --> RDFStore(\"<strong>PatchHandler_RDFStore</strong><br>WaterfallHandler\")\n RDFStore --> RDFStoreArgs\n\n subgraph RDFStoreArgs[\" \"]\n Immutable(\"<strong>PatchHandler_ImmutableMetadata</strong><br>ImmutableMetadataPatcher\")\n RDF(\"<strong>PatchHandler_RDF</strong><br>WaterfallHandler\")\n Immutable --> RDF\n end\n\n RDF --> RDFArgs\n\n subgraph RDFArgs[\" \"]\n direction LR\n N3(\"<br>N3Patcher\")\n SPARQL(\"<br>SparqlUpdatePatcher\")\n end The PatchingStore is the entry point. It first checks whether the next store supports modifying resources. Only if this is not the case will it start the generic patching solution by calling its PatchHandler.
The RepresentationPatchHandler calls the source ResourceStore to get a data stream representing the current state of the resource. It feeds that stream as input into a RepresentationPatcher, and then writes the result back to the store.
Similarly to the way accessing resources is done through a stack of ResourceStores, patching is done through a stack of RepresentationPatchers, each performing a step in the patching process.
The ConvertingPatcher is responsible for converting the original resource to a stream of quad objects, and converting the modified result back to the original type. By converting to quads, all other relevant classes can act independently of the actual RDF serialization type. For similar reasons, the RdfPatcher converts the quad stream to an N3.js Store so the next patchers do not have to worry about handling stream data and have access to the entire resource in memory.
The ImmutableMetadataPatcher keeps track of a list of triples that cannot be modified in the metadata of a resource. For example, it is not possible to modify the metadata to indicate whether it is a storage root. The ImmutableMetadataPatcher tracks all these triples before and after a metadata resource is modified, and throws an error if one is modified. If the target resource is not metadata but a standard resource, this class will be skipped.
Finally, either the N3Patcher or the SparqlUpdatePatcher will be called, depending on the type of patch that is requested.
The interface of a ResourceStore is mostly a 1-to-1 mapping of the HTTP methods:
getRepresentation, setRepresentation, addResource, deleteResource, and modifyResource. The corresponding OperationHandler of the relevant method is responsible for calling the correct ResourceStore function.
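In simplified TypeScript (argument types reduced to strings for brevity; the real interface works with identifiers, representations, and metadata), that mapping looks roughly like this:

```typescript
// Simplified view of the ResourceStore interface and the HTTP method
// each function corresponds to.
interface ResourceStore {
  getRepresentation(identifier: string): unknown;             // GET
  setRepresentation(identifier: string, body: unknown): void; // PUT
  addResource(container: string, body: unknown): void;        // POST
  deleteResource(identifier: string): void;                   // DELETE
  modifyResource(identifier: string, patch: unknown): void;   // PATCH
}

const methodToFunction: Record<string, keyof ResourceStore> = {
  GET: 'getRepresentation',
  PUT: 'setRepresentation',
  POST: 'addResource',
  DELETE: 'deleteResource',
  PATCH: 'modifyResource',
};
```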
In practice, the community server has multiple resource stores chained together, each handling a specific part of the request and then calling the next store in the chain. The default configurations come with the following stores:
MonitoringStore, IndexRepresentationStore, LockingResourceStore, PatchingStore, RepresentationConvertingStore, and DataAccessorBasedStore. This chain can be seen in the configuration part in config/storage/middleware/default.json and all the entries in config/storage/backend.
This store emits the events that are necessary to emit notifications when resources change.
There are 4 different events that can be emitted:
this.emit('changed', identifier, activity) is emitted for every resource that was changed/affected by a call to the store, with activity being undefined or one of the available ActivityStreams terms. this.emit(AS.Create, identifier) is emitted for every resource that was created by the call to the store. this.emit(AS.Update, identifier) is emitted for every resource that was updated by the call to the store. this.emit(AS.Delete, identifier) is emitted for every resource that was deleted by the call to the store. A changed event will always be emitted if a resource was changed. If the correct metadata was set by the source ResourceStore, an additional field will be sent along indicating the type of change, and an additional corresponding event will be emitted, depending on what the change is.
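The event pattern described above can be sketched with Node's EventEmitter (the class name and constant below are illustrative, not the server's actual code):

```typescript
import { EventEmitter } from 'node:events';

// Assumed ActivityStreams term; the server uses the full vocabulary.
const AS_CREATE = 'https://www.w3.org/ns/activitystreams#Create';

class MonitoringStoreSketch extends EventEmitter {
  // Emit the generic 'changed' event, plus the specific activity
  // event when the type of change is known.
  public reportChange(identifier: string, activity?: string): void {
    this.emit('changed', identifier, activity);
    if (activity) {
      this.emit(activity, identifier);
    }
  }
}
```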
When doing a GET request on a container /container/, this container returns the contents of /container/index.html instead if HTML is the preferred response type. All these values are the defaults and can be configured for other resources and media types.
To prevent data corruption, the server locks resources when being targeted by a request. Locks are only released when an operation is completely finished, in the case of read operations this means the entire data stream is read, and in the case of write operations this happens when all relevant data is written. The default lock that is used is a readers-writer lock. This allows simultaneous read requests on the same resource, but only while no write request is in progress.
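The bookkeeping of a readers-writer lock can be sketched as follows (a simplified, non-blocking illustration; the server's actual locks are asynchronous and queue waiting requests):

```typescript
// Many readers may hold the lock at once; a writer needs exclusivity.
class ReadWriteLockSketch {
  private readers = 0;
  private writing = false;

  public tryAcquireRead(): boolean {
    if (this.writing) return false;
    this.readers += 1;
    return true;
  }

  public tryAcquireWrite(): boolean {
    if (this.writing || this.readers > 0) return false;
    this.writing = true;
    return true;
  }

  public releaseRead(): void { this.readers -= 1; }
  public releaseWrite(): void { this.writing = false; }
}
```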
"},{"location":"architecture/features/protocol/resource-store/#patchingstore","title":"PatchingStore","text":"PATCH operations in Solid apply certain transformations on the target resource, which makes them more complicated than only reading or writing data since it involves both. The PatchingStore provides a generic solution for backends that do not implement the modifyResource function so new backends can be added more easily. In case the next store in the chain does not support PATCH, the PatchingStore will GET the data from the next store, apply the transformation on that data, and then PUT it back to the store.
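The fallback described above can be sketched like this (a synchronous simplification with illustrative names; the actual stores are asynchronous and stream-based):

```typescript
// A store may or may not implement `modify` natively.
interface StoreSketch {
  get(identifier: string): string;
  set(identifier: string, data: string): void;
  modify?(identifier: string, patch: (data: string) => string): void;
}

// Generic patching: prefer the backend's own implementation, otherwise
// fall back to GET -> apply patch -> PUT.
function patchResource(store: StoreSketch, identifier: string, patch: (data: string) => string): void {
  if (store.modify) {
    store.modify(identifier, patch);
  } else {
    store.set(identifier, patch(store.get(identifier)));
  }
}
```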
This store handles everything related to content negotiation. In case the resulting data of a GET request does not match the preferences of a request, it will be converted here. Similarly, if incoming data does not match the type expected by the store (the SPARQL backend only accepts triples, for example), that is also handled here.
"},{"location":"architecture/features/protocol/resource-store/#dataaccessorbasedstore","title":"DataAccessorBasedStore","text":"Large parts of the requirements of the Solid protocol specification are resolved by the DataAccessorBasedStore: POST only working on containers, DELETE not working on non-empty containers, generating ldp:contains triples for containers, etc. Most of this behaviour is independent of how the data is stored which is why it can be generalized here. The store's name comes from the fact that it makes use of DataAccessors to handle the read/write of resources. A DataAccessor is a simple interface that only focuses on handling the data. It does not concern itself with any of the necessary Solid checks as it assumes those have already been made. This means that if a storage method needs to be supported, only a new DataAccessor needs to be made, after which it can be plugged into the rest of the server.
The community server is fully written in TypeScript.
All changes should be done through pull requests.
We recommend first discussing a possible solution in the relevant issue to reduce the amount of changes that will be requested.
In case any of your changes are breaking, make sure you target the next major branch (versions/x.0.0) instead of the main branch. Breaking changes include: changing interface/class signatures, potentially breaking external custom configurations, and breaking how internal data is stored. In case of doubt you probably want to target the next major branch.
We make use of Conventional Commits.
Don't forget to update the release notes when adding new major features. Also update any relevant documentation in case this is needed.
When making changes to a pull request, we prefer to update the existing commits with a rebase instead of appending new commits; this way the PR can be rebased directly onto the target branch instead of needing to be squashed.
There are strict requirements from the linter and the test coverage before a PR is valid. These are configured to run automatically when trying to commit to git. Although there are no tests for it (yet), we strongly advise documenting with TSDoc.
If a list of entries is alphabetically sorted, such as index.ts, make sure it stays that way.
"},{"location":"contributing/release/","title":"Releasing a new major version","text":"This is only relevant if you are a developer with push access responsible for doing a new release.
Steps to follow:
Merge main into versions/next-major.
Verify that the RELEASE_NOTES.md are correct.
Run npm run release -- -r major. This:
- updates the configs and commits with chore(release): Update configs to vx.0.0;
- updates the version in package.json, and generates the new entries in CHANGELOG.md. Commits with chore(release): Release version vx.0.0 of the npm package.
- You can first run npx commit-and-tag-version -r major --dry-run to preview the commands that will be run and the changes to CHANGELOG.md.
- The postrelease script will now prompt you to manually edit the CHANGELOG.md.
- The postrelease script will amend the release commit, create an annotated tag and push changes to origin.
Merge versions/next-major into main and push.
Run npm publish.
Run npm dist-tag add @solid/community-server@x.0.0 next.
Update the versions/x.0.0 branch to the next version.
For a pre-release: npm run release -- -r major --prerelease alpha, merge versions/next-major into main, and npm publish --tag next.
For a minor release: npm run release -- -r minor and merge versions/next-major into main.
One potential issue for scripts and other applications is that logging in and authenticating requires user interaction. The CSS offers an alternative solution for such cases by making use of Client Credentials. Once you have created an account as described in the Identity Provider section, users can request a token that apps can use to authenticate without user input.
All requests to the client credentials API currently require you to send along the email and password of that account to identify yourself. This is a temporary solution until the server has more advanced account management, after which this API will change.
Below is example code of how to make use of these tokens. It makes use of several utility functions from the Solid Authentication Client. Note that the code below uses top-level await, which not all JavaScript engines support, so this should all be contained in an async function.
A token can be created either on your account page, by default http://localhost:3000/.account/, or by calling the relevant API.
Below is an example of how to call the API to generate such a token.
The code below generates a token linked to your account and WebID. This only needs to be done once, afterwards this token can be used for all future requests.
// This assumes your server is started under http://localhost:3000/.\n// It also assumes you have already logged in and `cookie` contains a valid cookie header\n// as described in the API documentation.\nconst indexResponse = await fetch('http://localhost:3000/.account/', { headers: { cookie }});\nconst { controls } = await indexResponse.json();\nconst response = await fetch(controls.account.clientCredentials, {\n method: 'POST',\n headers: { cookie, 'content-type': 'application/json' },\n // The name field will be used when generating the ID of your token.\n // The WebID field determines which WebID you will identify as when using the token.\n // Only WebIDs linked to your account can be used.\n body: JSON.stringify({ name: 'my-token', webId: 'http://localhost:3000/my-pod/card#me' }),\n});\n\n// These are the identifier and secret of your token.\n// Store the secret somewhere safe as there is no way to request it again from the server!\n// The `resource` value can be used to delete the token at a later point in time.\nconst { id, secret, resource } = await response.json();\n In case something goes wrong the status code will be 400/500 and the response body will contain a description of the problem.
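When scripting against this API, it helps to fail fast on such error responses. Below is a minimal sketch of one way to do that; it assumes the error body is JSON and may contain a message field, so adapt it to what your server actually returns.

```javascript
// Sketch: read the JSON body of an API response, throwing on 400/500 answers.
// Assumes error bodies are JSON and may contain a "message" field.
async function readJsonOrThrow(response) {
  const json = await response.json();
  if (!response.ok) {
    throw new Error(`Request failed (${response.status}): ${json.message ?? JSON.stringify(json)}`);
  }
  return json;
}

// Example with a stubbed error response (no server needed):
const bad = new Response(JSON.stringify({ message: 'invalid credentials' }), { status: 400 });
readJsonOrThrow(bad).catch((error) => console.log(error.message));
// Request failed (400): invalid credentials
```

This uses the Response class that is global in Node.js 18 and up; with node-fetch, the same pattern applies to its response objects.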
"},{"location":"usage/client-credentials/#requesting-an-access-token","title":"Requesting an Access token","text":"The ID and secret combination generated above can be used to request an Access Token from the server. This Access Token is only valid for a certain amount of time, after which a new one needs to be requested.
import { createDpopHeader, generateDpopKeyPair } from '@inrupt/solid-client-authn-core';\nimport fetch from 'node-fetch';\n\n// A key pair is needed for encryption.\n// This function from `solid-client-authn` generates such a pair for you.\nconst dpopKey = await generateDpopKeyPair();\n\n// These are the ID and secret generated in the previous step.\n// Both the ID and the secret need to be form-encoded.\nconst authString = `${encodeURIComponent(id)}:${encodeURIComponent(secret)}`;\n// This URL can be found by looking at the \"token_endpoint\" field at\n// http://localhost:3000/.well-known/openid-configuration\n// if your server is hosted at http://localhost:3000/.\nconst tokenUrl = 'http://localhost:3000/.oidc/token';\nconst response = await fetch(tokenUrl, {\n method: 'POST',\n headers: {\n // The header needs to be in base64 encoding.\n authorization: `Basic ${Buffer.from(authString).toString('base64')}`,\n 'content-type': 'application/x-www-form-urlencoded',\n dpop: await createDpopHeader(tokenUrl, 'POST', dpopKey),\n },\n body: 'grant_type=client_credentials&scope=webid',\n});\n\n// This is the Access token that will be used to do an authenticated request to the server.\n// The JSON also contains an \"expires_in\" field in seconds,\n// which you can use to know when you need to request a new Access token.\nconst { access_token: accessToken } = await response.json();\n"},{"location":"usage/client-credentials/#using-the-access-token-to-make-an-authenticated-request","title":"Using the Access token to make an authenticated request","text":"Once you have an Access token, you can use it for authenticated requests until it expires.
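To avoid requests failing mid-run, you can compute a refresh deadline from the expires_in value of the token response. A small sketch; the 60-second safety margin is an arbitrary choice, not something the server prescribes.

```javascript
// Sketch: compute when to request a new Access token,
// based on the "expires_in" value (in seconds) from the token response.
function refreshDeadline(expiresIn, now = Date.now()) {
  const marginSeconds = 60; // refresh a bit early to avoid using an expired token
  return now + Math.max(0, expiresIn - marginSeconds) * 1000;
}

// With a token valid for one hour, refresh after 59 minutes:
console.log(refreshDeadline(3600, 0)); // 3540000
```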
import { buildAuthenticatedFetch } from '@inrupt/solid-client-authn-core';\nimport fetch from 'node-fetch';\n\n// The DPoP key needs to be the same key as the one used in the previous step.\n// The Access token is the one generated in the previous step.\nconst authFetch = await buildAuthenticatedFetch(fetch, accessToken, { dpopKey });\n// authFetch can now be used as a standard fetch function that will authenticate as your WebID.\n// This request will do a simple GET for example.\nconst response = await authFetch('http://localhost:3000/private');\n"},{"location":"usage/client-credentials/#other-token-actions","title":"Other token actions","text":"You can see all your existing tokens on your account page or by doing a GET request to the same API used to create a new token. The details of a token can be seen by doing a GET request to the resource URL of the token.
A token can be deleted by doing a DELETE request to the resource URL of the token.
All of these actions require you to be logged in to the account.
"},{"location":"usage/dev-configuration/","title":"Configuring the CSS as a development server in another project","text":"It can be useful to use the CSS as local server to develop Solid applications against. As an alternative to using CLI arguments, or environment variables, the CSS can be configured in the package.json as follows:
{\n \"name\": \"test\",\n \"version\": \"0.0.0\",\n \"private\": \"true\",\n \"config\": {\n \"community-solid-server\": {\n \"port\": 3001,\n \"loggingLevel\": \"error\"\n }\n },\n \"scripts\": {\n \"dev:pod\": \"community-solid-server\"\n },\n \"devDependencies\": {\n \"@solid/community-server\": \"^7.0.0\"\n }\n}\n These parameters will then be used when the community-solid-server command is executed as an npm script (as shown in the example above). Or whenever the community-solid-server command is executed in the same folder as the package.json.
Alternatively, the configuration parameters may be placed in a configuration file named .community-solid-server.config.json as follows:
{\n \"port\": 3001,\n \"loggingLevel\": \"error\"\n}\n The config may also be written in JavaScript with the config as the default export such as the following .community-solid-server.config.js:
module.exports = {\n port: 3001,\n loggingLevel: 'error'\n};\n"},{"location":"usage/example-requests/","title":"Interacting with the server","text":""},{"location":"usage/example-requests/#put-creating-resources-for-a-given-url","title":"PUT: Creating resources for a given URL","text":"Create a plain text file:
curl -X PUT -H \"Content-Type: text/plain\" \\\n -d \"abc\" \\\n http://localhost:3000/myfile.txt\n Create a turtle file:
curl -X PUT -H \"Content-Type: text/turtle\" \\\n -d \"<ex:s> <ex:p> <ex:o>.\" \\\n http://localhost:3000/myfile.ttl\n"},{"location":"usage/example-requests/#post-creating-resources-at-a-generated-url","title":"POST: Creating resources at a generated URL","text":"Create a plain text file:
curl -X POST -H \"Content-Type: text/plain\" \\\n -d \"abc\" \\\n http://localhost:3000/\n Create a turtle file:
curl -X POST -H \"Content-Type: text/turtle\" \\\n -d \"<ex:s> <ex:p> <ex:o>.\" \\\n http://localhost:3000/\n The response's Location header will contain the URL of the created resource.
"},{"location":"usage/example-requests/#get-retrieving-resources","title":"GET: Retrieving resources","text":"Retrieve a plain text file:
curl -H \"Accept: text/plain\" \\\n http://localhost:3000/myfile.txt\n Retrieve a turtle file:
curl -H \"Accept: text/turtle\" \\\n http://localhost:3000/myfile.ttl\n Retrieve a turtle file in a different serialization:
curl -H \"Accept: application/ld+json\" \\\n http://localhost:3000/myfile.ttl\n"},{"location":"usage/example-requests/#delete-deleting-resources","title":"DELETE: Deleting resources","text":"curl -X DELETE http://localhost:3000/myfile.txt\n"},{"location":"usage/example-requests/#patch-modifying-resources","title":"PATCH: Modifying resources","text":"Modify a resource using N3 Patch:
curl -X PATCH -H \"Content-Type: text/n3\" \\\n --data-raw \"@prefix solid: <http://www.w3.org/ns/solid/terms#>. _:rename a solid:InsertDeletePatch; solid:inserts { <ex:s2> <ex:p2> <ex:o2>. }.\" \\\n http://localhost:3000/myfile.ttl\n Modify a resource using SPARQL Update:
curl -X PATCH -H \"Content-Type: application/sparql-update\" \\\n -d \"INSERT DATA { <ex:s2> <ex:p2> <ex:o2> }\" \\\n http://localhost:3000/myfile.ttl\n"},{"location":"usage/example-requests/#head-retrieve-resources-headers","title":"HEAD: Retrieve resources headers","text":"curl -I -H \"Accept: text/plain\" \\\n http://localhost:3000/myfile.txt\n"},{"location":"usage/example-requests/#options-retrieve-resources-communication-options","title":"OPTIONS: Retrieve resources communication options","text":"curl -X OPTIONS -i http://localhost:3000/myfile.txt\n"},{"location":"usage/identity-provider/","title":"Identity Provider","text":"Besides implementing the Solid protocol, the community server can also be an Identity Provider (IDP), officially known as an OpenID Provider (OP), following the Solid-OIDC specification as much as possible.
It is recommended to use the latest version of the Solid authentication client to interact with the server.
It also provides account management options for creating pods and WebIDs to be used during authentication, which are discussed more in-depth below. The links on this page assume the server is hosted at http://localhost:3000/.
To register an account, you can go to http://localhost:3000/.account/password/register/, if this feature is enabled. There you can create an account with the email/password login method. The password will be salted and hashed before being stored. Afterwards you will be redirected to the account page where you can create pods and link WebIDs to your account.
To create a pod you simply have to fill in the name you want your pod to have. This will then be used to generate the full URL of your pod. For example, if you choose the name test, your pod would be located at http://localhost:3000/test/ and your generated WebID would be http://localhost:3000/test/profile/card#me.
If you fill in a WebID when creating the pod, that WebID will be the one that has access to all data in the pod. If you don't, a WebID will be created in the pod and immediately linked to your account, allowing you to use it for authentication and accessing the data in that pod.
The generated name also depends on the configuration you chose for your server. If you are using the subdomain feature, the generated pod URL would be http://test.localhost:3000/.
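The resulting URLs can be sketched as follows. This mirrors the examples above; the actual generation happens server-side and depends on your configuration.

```javascript
// Sketch: how a pod name maps to a pod URL, for the default (path-based)
// and the subdomain configurations.
function podUrl(baseUrl, name, subdomain = false) {
  const base = new URL(baseUrl);
  if (subdomain) {
    return `${base.protocol}//${name}.${base.host}/`;
  }
  return new URL(`${name}/`, base).href;
}

console.log(podUrl('http://localhost:3000/', 'test'));       // http://localhost:3000/test/
console.log(podUrl('http://localhost:3000/', 'test', true)); // http://test.localhost:3000/
```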
To use Solid authentication, you need to link at least one WebID to your account. This can happen automatically when creating a pod as mentioned above, or can be done manually with external WebIDs.
If you try to link an external WebID, the first attempt will return an error indicating you need to add an identification triple to your WebID. After doing that you can try to register again. This is how we verify you are the owner of that WebID. Afterwards the page will inform you that you have to add a triple to your WebID if you want to use the server as your IDP.
"},{"location":"usage/identity-provider/#logging-in","title":"Logging in","text":"When using an authenticating client, you will be redirected to a login screen asking for your email and password. After that you will be redirected to a page showing some basic information about the client where you can pick the WebID you want to use. There you need to consent that this client is allowed to identify using that WebID. As a result the server will send a token back to the client that contains all the information needed to use your WebID.
"},{"location":"usage/identity-provider/#forgot-password","title":"Forgot password","text":"If you forgot your password, you can recover it by going to http://localhost:3000/.account/login/password/forgot/. There you can enter your email address to get a recovery mail to reset your password. This feature only works if a mail server was configured, which by default is not the case.
All of the above happens through HTML pages provided by the server. By default, the server uses the templates found in /templates/identity/ but different templates can be used through configuration.
These templates all make use of a JSON API exposed by the server. A full description of this API can be found here.
"},{"location":"usage/identity-provider/#idp-configuration","title":"IDP configuration","text":"The above descriptions cover server behaviour with most default configurations, but just like any other feature, there are several features that can be changed through the imports in your configuration file.
All available options can be found in the config/identity/ folder. Below we go a bit deeper into the available options.
The access option allows you to set authorization restrictions on the IDP API when enabled, similar to how authorization works on the LDP requests on the server. For example, if the server uses WebACL as authorization scheme, you can put a .acl resource in the /.account/account/ container to restrict who is allowed to access the account creation API. Note that for everything to work there needs to be a .acl resource in /.account/ when using WebACL so resources can be accessed as usual when the server starts up. Make sure you change the permissions on /.account/.acl so not everyone can modify those.
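As an illustration only, such a .acl resource could look like the following WebACL sketch, where the WebID is a placeholder for the account that should keep control; double-check this against your own authorization setup before using it.

```turtle
# Hypothetical /.account/.acl: one placeholder WebID keeps full control,
# everyone else can only read (and thus still reach the account pages).
@prefix acl: <http://www.w3.org/ns/auth/acl#>.
@prefix foaf: <http://xmlns.com/foaf/0.1/>.

<#owner> a acl:Authorization;
    acl:agent <http://localhost:3000/my-pod/profile/card#me>;
    acl:accessTo <./>;
    acl:default <./>;
    acl:mode acl:Read, acl:Write, acl:Control.

<#public> a acl:Authorization;
    acl:agentClass foaf:Agent;
    acl:accessTo <./>;
    acl:default <./>;
    acl:mode acl:Read.
```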
All of the above is only relevant if you use the restricted.json setting for this import. When you use public.json the API will simply always be accessible by everyone.
In case you want users to be able to reset their password when they forget it, you will need to tell the server which email server to use to send reset mails. example.json contains an example of what this looks like. When using this import, you can override the values with those of your own mail client by adding the following to your Components.js configuration with updated values:
{\n \"comment\": \"The settings of your email server.\",\n \"@type\": \"Override\",\n \"overrideInstance\": {\n \"@id\": \"urn:solid-server:default:EmailSender\"\n },\n \"overrideParameters\": {\n \"@type\": \"BaseEmailSender\",\n \"senderName\": \"Community Solid Server <solid@example.email>\",\n \"emailConfig_host\": \"smtp.example.email\",\n \"emailConfig_port\": 587,\n \"emailConfig_auth_user\": \"solid@example.email\",\n \"emailConfig_auth_pass\": \"NYEaCsqV7aVStRCbmC\"\n }\n}\n"},{"location":"usage/identity-provider/#handler","title":"handler","text":"Here you determine which features of account management are available. default.json allows everything, disabled.json completely disables account management, and the other options disable account and/or pod creation.
The pod option determines how pods are created. static.json is the expected pod behaviour as described above. dynamic.json is an experimental feature that allows users to have a custom Components.js configuration for their own pod. When using such a configuration, a JSON file will be written containing all the information of the user pods, so they can be recreated when the server restarts.
Due to its modular nature, it is possible to add new login methods to the server, allowing users to log in different ways than just the standard email/password combination. More information on what is required can be found here.
"},{"location":"usage/identity-provider/#data-migration","title":"Data migration","text":"Going from v6 to v7 of the server, the account management is completely rewritten, including how account data is stored on the server. More information about how account data of an existing server can be migrated to the newer version can be found here.
"},{"location":"usage/metadata/","title":"Editing metadata of resources","text":""},{"location":"usage/metadata/#what-is-a-description-resource","title":"What is a description resource","text":"Description resources contain auxiliary information about a resource. In CSS, these represent metadata corresponding to that resource. Every resource always has a corresponding description resource and therefore description resources can not be created or deleted directly.
Description resources are discoverable by interacting with their subject resource: the response to a GET or HEAD request on a subject resource will contain a describedby Link Header with a URL that points to its description resource.
Clients should always follow this link rather than guessing its URL, because the Solid Protocol does not mandate a specific description resource URL. The default CSS configurations follow the convention that http://example.org/resource has http://example.org/resource.meta as its description resource.
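A quick way to follow that link programmatically is to pull the describedby target out of the Link header. A minimal sketch for the simple comma-separated form used in the examples on this page; real clients may prefer a dedicated Link-header parsing library.

```javascript
// Sketch: find the target of a given rel in a Link header value.
function findLinkRel(linkHeader, rel) {
  for (const part of linkHeader.split(',')) {
    const match = /<([^>]*)>\s*;\s*rel="?([^";]+)"?/.exec(part.trim());
    if (match && match[2] === rel) {
      return match[1];
    }
  }
  return null;
}

const header = '<http://localhost:3000/foo/.meta>; rel="describedby"';
console.log(findLinkRel(header, 'describedby')); // http://localhost:3000/foo/.meta
```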
Editing the metadata of a resource is performed by editing the description resource directly. This can only be done using PATCH requests (see example workflow).
PUT requests on description resources are not allowed, because they would replace the entire resource state, whereas some metadata is protected or generated by the server.
Similarly, DELETE on description resources is not allowed because a resource will always have some metadata (e.g. rdf:type). Instead, the lifecycle of description resources is managed by the server.
Some metadata is managed by the server and can not be modified directly, such as the last modified date. The CSS will throw an error (409 ConflictHttpError) when trying to change this protected metadata.
PUT requests on a resource will reset the description resource. There is however a way to keep the contents of the description resource prior to the PUT request: adding the HTTP Link header targeting the description resource with rel=\"preserve\".
When the resource URL is http://localhost:3000/foobar, preserving its description resource when updating its contents can be achieved like in the following example:
curl -X PUT 'http://localhost:3000/foobar' \\\n-H 'Content-Type: text/turtle' \\\n-H 'Link: <http://localhost:3000/foobar.meta>;rel=\"preserve\"' \\\n-d \"<ex:s> <ex:p> <ex:o>.\"\n"},{"location":"usage/metadata/#impact-on-creating-containers","title":"Impact on creating containers","text":"When creating a container the input body is ignored and performing a PUT request on an existing container will result in an error. Container metadata can only be added and modified by performing a PATCH on the description resource, similarly to documents. This is done to clearly differentiate between a container's representation and its metadata.
In this example, we add an inbox description to http://localhost:3000/foo/. This allows discovery of the ldp:inbox as described in the Linked Data Notifications specification.
We have started the CSS with the default configuration and have already created an inbox at http://localhost:3000/inbox/.
Since we don't know the location of the description resource, we first send a HEAD request to the resource to obtain the URL of its description resource.
curl --head 'http://localhost:3000/foo/'\n which will produce a response with at least these headers:
HTTP/1.1 200 OK\nLink: <http://localhost:3000/foo/.meta>; rel=\"describedby\"\n Now that we have the URL of the description resource, we create a patch for adding the inbox in the description of the resource.
curl -X PATCH 'http://localhost:3000/foo/.meta' \\\n-H 'Content-Type: text/n3' \\\n--data-raw '@prefix solid: <http://www.w3.org/ns/solid/terms#>.\n<> a solid:InsertDeletePatch;\nsolid:inserts { <http://localhost:3000/foo/> <http://www.w3.org/ns/ldp#inbox> <http://localhost:3000/inbox/>. }.'\n After this update, we can verify that the inbox is added by performing a GET request to the description resource
curl 'http://localhost:3000/foo/.meta'\n The result will be the following body:
@prefix dc: <http://purl.org/dc/terms/>.\n@prefix ldp: <http://www.w3.org/ns/ldp#>.\n@prefix posix: <http://www.w3.org/ns/posix/stat#>.\n@prefix xsd: <http://www.w3.org/2001/XMLSchema#>.\n\n<http://localhost:3000/foo/> a ldp:Container, ldp:BasicContainer, ldp:Resource;\n dc:modified \"2022-06-09T08:17:07.000Z\"^^xsd:dateTime;\n ldp:inbox <http://localhost:3000/inbox/>.\n This can also be verified by sending a GET request to the subject resource itself. The inbox location can also be found in the Link headers.
curl -v 'http://localhost:3000/foo/'\n HTTP/1.1 200 OK\nLink: <http://localhost:3000/inbox/>; rel=\"http://www.w3.org/ns/ldp#inbox\"\n"},{"location":"usage/notifications/","title":"Receiving notifications","text":"A CSS instance can be configured to support Solid notifications. These can be used to track changes on the server. There are no specific requirements on the type of notifications a Solid server should support, so on this page we'll describe the notification types supported by CSS, and how to make use of the different ways supported to receive notifications.
"},{"location":"usage/notifications/#discovering-subscription-services","title":"Discovering subscription services","text":"CSS only supports discovering the notification subscription services through the storage description resource. This can be found by doing a HEAD request on any resource in your pod and looking for the Link header with the http://www.w3.org/ns/solid/terms#storageDescription relationship.
For example, when hosting the server on localhost with port 3000, the result is:
Link: <http://localhost:3000/.well-known/solid>; rel=\"http://www.w3.org/ns/solid/terms#storageDescription\"\n Doing a GET to http://localhost:3000/.well-known/solid then gives the following result (simplified for readability):
@prefix notify: <http://www.w3.org/ns/solid/notifications#>.\n\n<http://localhost:3000/.well-known/solid>\n a <http://www.w3.org/ns/pim/space#Storage> ;\n notify:subscription <http://localhost:3000/.notifications/WebSocketChannel2023/> ,\n <http://localhost:3000/.notifications/WebhookChannel2023/> .\n<http://localhost:3000/.notifications/WebSocketChannel2023/>\n notify:channelType notify:WebSocketChannel2023 ;\n notify:feature notify:accept ,\n notify:endAt ,\n notify:rate ,\n notify:startAt ,\n notify:state .\n<http://localhost:3000/.notifications/WebhookChannel2023/>\n notify:channelType notify:WebhookChannel2023;\n notify:feature notify:accept ,\n notify:endAt ,\n notify:rate ,\n notify:startAt ,\n notify:state .\n This says that there are two available subscription services that can be used for notifications and where to find them. Note that these discovery requests also support content-negotiation, so you could ask for JSON-LD if you prefer. Currently, however, this JSON-LD will not match the examples from the notification specification.
The above tells us where to send subscriptions and which features are supported for those services. You subscribe to a channel by POSTing a JSON-LD document to the subscription services. There are some small differences in the structure of these documents, depending on the channel type, which will be discussed below.
Subscription requests need to be authenticated using Solid-OIDC. The server will check whether you have Read permission on the resource you want to listen to. Requests without Read permission will be rejected.
There are currently up to two supported ways to get notifications in CSS, depending on your configuration: the notification channel types WebSocketChannel2023 and WebhookChannel2023.
To subscribe to the http://localhost:3000/foo resource using WebSockets, you use an authenticated POST request to send the following JSON-LD document to the server, at http://localhost:3000/.notifications/WebSocketChannel2023/:
{\n \"@context\": [ \"https://www.w3.org/ns/solid/notification/v1\" ],\n \"type\": \"http://www.w3.org/ns/solid/notifications#WebSocketChannel2023\",\n \"topic\": \"http://localhost:3000/foo\"\n}\n If you have Read permissions, the server's reply will look like this:
{\n \"@context\": [ \"https://www.w3.org/ns/solid/notification/v1\" ],\n \"id\": \"http://localhost:3000/.notifications/WebSocketChannel2023/dea6f614-08ab-4cc1-bbca-5dece0afb1e2\",\n \"type\": \"http://www.w3.org/ns/solid/notifications#WebSocketChannel2023\",\n \"topic\": \"http://localhost:3000/foo\",\n \"receiveFrom\": \"ws://localhost:3000/.notifications/WebSocketChannel2023/?auth=http%3A%2F%2Flocalhost%3A3000%2F.notifications%2FWebSocketChannel2023%2Fdea6f614-08ab-4cc1-bbca-5dece0afb1e2\"\n}\n The most important field is receiveFrom. This field tells you the WebSocket to which you need to connect, through which you will start receiving notifications. In JavaScript, this can be done using the WebSocket object, such as:
const ws = new WebSocket(receiveFrom);\nws.on('message', (notification) => console.log(notification));\n"},{"location":"usage/notifications/#webhooks","title":"Webhooks","text":"Similar to the WebSocket subscription, below is sample JSON-LD that would be sent to http://localhost:3000/.notifications/WebhookChannel2023/:
{\n \"@context\": [ \"https://www.w3.org/ns/solid/notification/v1\" ],\n \"type\": \"http://www.w3.org/ns/solid/notifications#WebhookChannel2023\",\n \"topic\": \"http://localhost:3000/foo\",\n \"sendTo\": \"https://example.com/webhook\"\n}\n Note that this document has an additional sendTo field. This is the Webhook URL of your server, the URL to which you want the notifications to be sent.
The response would then be something like this:
{\n \"@context\": [ \"https://www.w3.org/ns/solid/notification/v1\" ],\n \"id\": \"http://localhost:3000/.notifications/WebhookChannel2023/eeaf2c17-699a-4e53-8355-e91d13807e5f\",\n \"type\": \"http://www.w3.org/ns/solid/notifications#WebhookChannel2023\",\n \"topic\": \"http://localhost:3000/foo\",\n \"sendTo\": \"https://example.com/webhook\"\n}\n"},{"location":"usage/notifications/#unsubscribing-from-a-notification-channel","title":"Unsubscribing from a notification channel","text":"Note
This feature is not part of the Solid Notification v0.2 specification so might be changed or removed in the future.
If you no longer want to receive notifications on the channel you created, you can send a DELETE request to the channel to remove it. Use the value found in the id field of the subscription response. There is no way to retrieve this identifier later on, so make sure to keep track of it just in case you want to unsubscribe at some point. No authorization is needed for this request.
Below is an example notification that would be sent when a resource changes:
{\n \"@context\": [\n \"https://www.w3.org/ns/activitystreams\",\n \"https://www.w3.org/ns/solid/notification/v1\"\n ],\n \"id\": \"urn:123456:http://example.com/foo\",\n \"type\": \"Update\",\n \"object\": \"http://localhost:3000/foo\",\n \"state\": \"987654\",\n \"published\": \"2023-02-09T15:08:12.345Z\"\n}\n A notification contains the following fields:
id: A unique identifier for this notification.
type: What happened to trigger the notification. We discuss the possible values below.
object: The resource that changed.
state: An identifier indicating the state of the resource. This corresponds to the ETag value you get when doing a request on the resource itself.
published: When this change occurred.
CSS supports five different notification types that the client can receive. The format of the notification can slightly change depending on the type.
Resource notification types:
Create: When the resource is created.
Update: When the existing resource is changed.
Delete: When the resource is deleted. Does not have a state field.
Additionally, when listening to a container, there are two extra notifications that are sent out when the contents of the container change. For these notifications, the object field references the resource that was added or removed, while the new target field references the container itself.
Add: When a new resource is added to the container.
Remove: When a resource is removed from the container.
The Solid notification specification describes several extra features that can be supported by notification channels. By default, these are all supported on the channels of a CSS instance, as can be seen in the descriptions returned by the server above. Each feature can be enabled by adding a field to the JSON-LD you send during subscription. The available fields are:
startAt: An xsd:dateTime describing when you want notifications to start. No notifications will be sent on this channel before this time.
endAt: An xsd:dateTime describing when you want notifications to stop. The channel will be destroyed at that time, and no more notifications will be sent.
state: A string corresponding to the state string of a resource notification. If this value differs from the actual state of the resource, a notification will be sent out immediately to inform the client that its stored state is outdated.
rate: An xsd:duration indicating how often notifications can be sent out. A new notification will only be sent out after this much time has passed since the previous notification.
accept: A description of the content-type(s) in which the client would want to receive the notifications. Expects the same values as an Accept HTTP header.
There is not much restriction on who can create a new notification channel; only Read permissions on the target resource are required. It is therefore possible for the server to accumulate created channels. As these channels still get used every time their corresponding resource changes, this could degrade server performance.
For this reason, the default server configuration removes notification channels after two weeks (20160 minutes). You can modify this behaviour by adding the following block to your configuration:
{\n \"@id\": \"urn:solid-server:default:WebSocket2023Subscriber\",\n \"@type\": \"NotificationSubscriber\",\n \"maxDuration\": 20160\n}\n maxDuration defines after how many minutes every channel will be removed. Setting this value to 0 will allow channels to exist forever. Similarly, to change the maximum duration of webhook channels you can use the identifier urn:solid-server:default:WebhookSubscriber.
If you need to seed accounts and pods, the --seedConfig command line option can be used with the path to a JSON file containing configurations for every required pod as its value. The file needs to contain an array of JSON objects, with each object containing at least an email and a password field. Such an object can also contain a pods array with multiple pod objects to create pods for the account, with contents being the same as those of the corresponding JSON API.
For example:
[\n {\n \"email\": \"hello@example.com\",\n \"password\": \"abc123\"\n },\n {\n \"email\": \"hello2@example.com\",\n \"password\": \"123abc\",\n \"pods\": [\n { \"name\": \"pod1\" },\n { \"name\": \"pod2\" }\n ]\n }\n]\n This feature cannot be used to register pods with pre-existing WebIDs, which requires an interactive validation step, unless you disable the WebID ownership check in your server configuration.
Note that pod seeding is designed for a default server setup with standard email/password login. If you add a new login method, you will need to create a corresponding pod seeding implementation to use it with that method.
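The requirements above can be sketched as a small validation helper (a hypothetical function, not part of the server, that mirrors the documented rules: each entry needs an email and password, and each entry in an optional pods array needs a name):

```javascript
// Hypothetical helper that checks a parsed seed file against the
// documented requirements before passing it to --seedConfig.
function validateSeedConfig(entries) {
  if (!Array.isArray(entries)) {
    throw new Error('Seed config must be a JSON array');
  }
  for (const entry of entries) {
    if (typeof entry.email !== 'string' || typeof entry.password !== 'string') {
      throw new Error('Every account entry needs an email and a password');
    }
    for (const pod of entry.pods ?? []) {
      if (typeof pod.name !== 'string') {
        throw new Error('Every pod entry needs a name');
      }
    }
  }
  return true;
}
```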
"},{"location":"usage/starting-server/","title":"Starting the Community Solid Server","text":""},{"location":"usage/starting-server/#quickly-spinning-up-a-server","title":"Quickly spinning up a server","text":"Use Node.js\u00a018.0 or up and execute:
npx @solid/community-server\n Now visit your brand new server at http://localhost:3000/!
To persist your pod's contents between restarts, use:
npx @solid/community-server -c @css:config/file.json -f data/\n"},{"location":"usage/starting-server/#local-installation","title":"Local installation","text":"Install the npm package globally with:
npm install -g @solid/community-server\n To run the server with in-memory storage, use:
community-solid-server # add parameters if needed\n To run the server with your current folder as storage, use:
community-solid-server -c @css:config/file.json -f data/\n"},{"location":"usage/starting-server/#configuring-the-server","title":"Configuring the server","text":"The Community Solid Server is designed to be flexible such that people can easily run different configurations. This is useful for customizing the server with plugins, testing applications in different setups, or developing new parts for the server without needing to change its base code.
An easy way to customize the server is by passing parameters to the server command. These parameters give you direct access to some commonly used settings:
--port, -p (default: 3000): The TCP port on which the server should listen.
--baseUrl, -b (default: http://localhost:$PORT/): The base URL used internally to generate URLs. Change this if your server does not run on http://localhost:$PORT/.
--socket: The Unix Domain Socket on which the server should listen. --baseUrl must be set if this option is provided.
--loggingLevel, -l (default: info): The detail level of logging; useful for debugging problems. Use debug for full information.
--config, -c (default: @css:config/default.json): The configuration(s) for the server. The default only stores data in memory; to persist to your filesystem, use @css:config/file.json.
--rootFilePath, -f (default: ./): Root folder where the server stores data, when using a file-based configuration.
--sparqlEndpoint, -s: URL of the SPARQL endpoint, when using a quadstore-based configuration.
--showStackTrace, -t (default: false): Enables detailed logging on error output.
--podConfigJson (default: ./pod-config.json): Path to the file that keeps track of dynamic Pod configurations. Only relevant when using @css:config/dynamic.json.
--seedConfig: Path to the file that keeps track of seeded account configurations.
--mainModulePath, -m: Path from where Components.js will start its lookup when initializing configurations.
--workers, -w (default: 1): Run in multithreaded mode using workers. Special values are -1 (scale to num_cores-1), 0 (scale to num_cores), and 1 (singlethreaded).
Parameters can also be passed through environment variables.
They are prefixed with CSS_ and converted from camelCase to CAMEL_CASE,
e.g. --showStackTrace => CSS_SHOW_STACK_TRACE.
Command-line arguments will always override environment variables.
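The naming conversion can be sketched as a small helper (illustrative only; the server performs this mapping internally):

```javascript
// Convert a camelCase CLI parameter name to its environment variable form,
// e.g. showStackTrace -> CSS_SHOW_STACK_TRACE.
function toEnvName(param) {
  return `CSS_${param.replace(/([a-z0-9])([A-Z])/g, '$1_$2').toUpperCase()}`;
}
```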
"},{"location":"usage/starting-server/#alternative-ways-to-run-the-server","title":"Alternative ways to run the server","text":""},{"location":"usage/starting-server/#from-source","title":"From source","text":"If you prefer to run the latest source code version, or if you want to try a specific branch of the code, you can use:
git clone https://github.com/CommunitySolidServer/CommunitySolidServer.git\ncd CommunitySolidServer\nnpm ci\nnpm start -- # add parameters if needed\n"},{"location":"usage/starting-server/#via-docker","title":"Via Docker","text":"Docker allows you to run the server without having Node.js installed. Images are built on each tagged version and hosted on Docker Hub.
# Clone the repo to get access to the configs\ngit clone https://github.com/CommunitySolidServer/CommunitySolidServer.git\ncd CommunitySolidServer\n# Run the image, serving your `~/Solid` directory on `http://localhost:3000`\ndocker run --rm -v ~/Solid:/data -p 3000:3000 -it solidproject/community-server:latest\n# Or use one of the built-in configurations\ndocker run --rm -p 3000:3000 -it solidproject/community-server -c config/default.json\n# Or use your own configuration mapped to the right directory\ndocker run --rm -v ~/solid-config:/config -p 3000:3000 -it solidproject/community-server -c /config/my-config.json\n# Or use environment variables to configure your css instance\ndocker run --rm -v ~/Solid:/data -p 3000:3000 -it -e CSS_CONFIG=config/file-no-setup.json -e CSS_LOGGING_LEVEL=debug solidproject/community-server\n"},{"location":"usage/starting-server/#using-a-helm-chart","title":"Using a Helm Chart","text":"The official Helm Chart for Kubernetes deployment is maintained at CommunitySolidServer/css-helm-chart and published on ArtifactHUB. There you will find complete installation instructions.
# Summary\nhelm repo add community-solid-server https://communitysolidserver.github.io/css-helm-chart/charts/\nhelm install my-css community-solid-server/community-solid-server\n"},{"location":"usage/account/json-api/","title":"Account management JSON API","text":"Everything related to account management is done through a JSON API, of which we will describe all paths below. There are also HTML pages available to handle account management that use these APIs internally. Links to these can be found in the HTML controls. All APIs expect JSON as input and will return JSON objects as output.
"},{"location":"usage/account/json-api/#finding-api-urls","title":"Finding API URLs","text":"All URLs below are relative to the index account API URL, which by default is http://localhost:3000/.account/. Every response of an API request will contain a controls object, containing all the URLs of the other API endpoints. It is generally advised to make use of these controls instead of hardcoding the URLs; that way, only the initial index URL needs to be known to find all the controls. Certain controls will be missing if those features are disabled in the configuration.
Many APIs require a POST request to perform an action. When doing a GET request on these APIs, they will return an object describing what input is expected for the POST.
"},{"location":"usage/account/json-api/#authorization","title":"Authorization","text":"After logging in, the API will return a set-cookie header of the format css-account=$VALUE. This cookie is necessary to have access to many of the APIs. When including this cookie, the controls object will also be extended with new URLs that are now accessible. When logging in, the response JSON body will also contain an authorization field containing the $VALUE value mentioned above. Instead of using cookies, this value can be used in an Authorization header with value CSS-Account-Token $VALUE to achieve the same result.
The expiration time of this cookie will be refreshed every time there is a successful request to the server with that cookie.
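The two ways of passing the token can be sketched as follows (a hypothetical client-side helper; the header and cookie formats are the ones described above):

```javascript
// Build request headers from the $VALUE returned on login.
// Either the cookie or the Authorization header variant can be used.
function authHeaders(token, { useCookie = false } = {}) {
  return useCookie
    ? { Cookie: `css-account=${token}` }
    : { Authorization: `CSS-Account-Token ${token}` };
}
```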
"},{"location":"usage/account/json-api/#redirecting","title":"Redirecting","text":"As redirects through status codes 3xx can make working with JSON APIs more difficult, the API will never make use of this. Instead, if a redirect is required after an action, the response JSON object will return a location field. This is the next URL that should be fetched. This is mostly relevant in OIDC interactions as these cause the interaction to progress.
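A client can handle this pattern with a small loop (a sketch; fetchJson is an assumed callback that fetches a URL and returns its parsed JSON body, not a server API):

```javascript
// Repeatedly fetch a URL and follow the `location` field in the JSON body,
// since the API never uses HTTP 3xx redirects.
async function followJsonRedirects(url, fetchJson, maxHops = 10) {
  let current = url;
  for (let i = 0; i < maxHops; i += 1) {
    const body = await fetchJson(current);
    if (typeof body.location !== 'string') {
      return { url: current, body };
    }
    current = body.location;
  }
  throw new Error('Too many JSON redirects');
}
```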
Below is an overview of all the keys in a controls object returned by the server, with all features enabled. An example of what such an object looks like can be found at the bottom of the page.
"},{"location":"usage/account/json-api/#controlsmain","title":"controls.main","text":"General controls that require no authentication.
"},{"location":"usage/account/json-api/#controlsmainindex","title":"controls.main.index","text":"General entrypoint to the API. Returns an empty object, including the controls, on all GET requests.
"},{"location":"usage/account/json-api/#controlsmainlogins","title":"controls.main.logins","text":"Returns an overview of all login systems available on the server in logins object. Keys are a string description of the login system and values are links to their login pages. This can be used to let users choose how they want to log in. By default, the object only contains the email/password login system.
All controls related to account management. All of these require authorization, except for the create action.
"},{"location":"usage/account/json-api/#controlsaccountcreate","title":"controls.account.create","text":"Creates a new account on empty POST requests. The response contains the necessary cookie values to log in. This account cannot be used until a login method has been added to it. All other interactions will fail until this is the case. See the controls.password.create section below for more information on how to do this. This account will expire after some time if no login method is added.
"},{"location":"usage/account/json-api/#controlsaccountlogout","title":"controls.account.logout","text":"Logs the account out on an empty POST request. Invalidates the cookie that was used.
"},{"location":"usage/account/json-api/#controlsaccountwebid","title":"controls.account.webId","text":"GET requests return all WebIDs linked to this account in the following format:
{\n \"webIdLinks\": {\n \"http://localhost:3000/test/profile/card#me\": \"http://localhost:3000/.account/account/c63c9e6f-48f8-40d0-8fec-238da893a7f2/webid/fdfc48c1-fe6f-4ce7-9e9f-1dc47eff803d/\"\n }\n}\n The URL value is the resource URL corresponding to the link with this WebID. The link can be removed by sending a DELETE request to that URL.
POST requests link a WebID to the account, allowing the account to identify as that WebID during an OIDC authentication interaction. Expected input is an object containing a webId field. The response will include the resource URL.
If the chosen WebID is contained within a Solid pod created by this account, the request will succeed immediately. If not, an error will be thrown, asking the user to add a specific triple to the WebID to confirm that they are the owner. After this triple is added, a second request will be successful.
"},{"location":"usage/account/json-api/#controlsaccountpod","title":"controls.account.pod","text":"GET requests return all pods created by this account in the following format:
{\n \"pods\": {\n \"http://localhost:3000/test/\": \"http://localhost:3000/.account/account/c63c9e6f-48f8-40d0-8fec-238da893a7f2/pod/df2d5a06-3ecd-4eaf-ac8f-b88a8579e100/\"\n }\n}\n The URL value is the resource URL corresponding to the pod. Doing a GET request to this resource will return the base URL of the pod and all of its owners, as shown below. You can send a POST request to this resource with a webId and visible: boolean field to add/update an owner and set its visibility. Visibility determines whether the owner is exposed through a link header when requesting the pod. You can also send a POST request to this resource with a webId and remove: true field to remove the owner.
{\n \"baseUrl\": \"http://localhost:3000/my-pod/\",\n \"owners\": [\n {\n \"webId\": \"http://localhost:3000/my-pod/profile/card#me\",\n \"visible\": false\n }\n ]\n}\n POST requests to controls.account.pod create a Solid pod for the account. The only required field is name, which will determine the name of the pod.
Additionally, a settings object can be sent along, the values of which will be sent to the templates used when generating the pod. If this settings object contains a webId field, that WebID will be the WebID that has initial access to the pod.
If no WebID value is provided, a WebID will be generated in the pod and immediately linked to the account as described in controls.account.webID. This WebID will then be the WebID that has initial access.
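Putting the above together, a pod-creation request body could look like this (the values are examples; any settings fields beyond webId depend on the templates used when generating the pod):

```json
{
  "name": "my-pod",
  "settings": {
    "webId": "https://example.com/alice#me"
  }
}
```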
"},{"location":"usage/account/json-api/#controlsaccountclientcredentials","title":"controls.account.clientCredentials","text":"GET requests return all client credentials created by this account in the following format:
{\n \"clientCredentials\": {\n \"token_562cdeb5-d4b2-4905-9e62-8969ac10daaa\": \"http://localhost:3000/.account/account/c63c9e6f-48f8-40d0-8fec-238da893a7f2/client-credentials/063ee3a7-e80f-4508-9f79-ffddda9df8d4/\"\n }\n}\n The URL value is the resource URL corresponding to that specific token. Sending a GET request to that URL will return information about the token, such as what the associated WebID is. The token can be removed by sending a DELETE request to that URL.
Creates a client credentials token on POST requests. More information on these tokens can be found here. Expected input is an object containing a name and webId field. The name is optional and will be used to name the token; the WebID determines which WebID you will identify as when using that token. It needs to be a WebID linked to the account as described in controls.account.webID.
Controls related to managing the email/password login method.
"},{"location":"usage/account/json-api/#controlspasswordcreate","title":"controls.password.create","text":"GET requests return all email/password logins of this account in the following format:
{\n \"passwordLogins\": {\n \"test@example.com\": \"http://localhost:3000/.account/account/c63c9e6f-48f8-40d0-8fec-238da893a7f2/login/password/7f042779-e2b2-444d-8cd9-50bd9cfa516d/\"\n }\n}\n The URL value is the resource URL corresponding to the login with the given email address. The login can be removed by sending a DELETE request to that URL. The password can be updated by sending a POST request to that URL with the body containing an oldPassword and a newPassword field.
POST requests create an email/password login and add it to the account you are logged in as. Expects email and password fields.
POST requests log a user in and return the relevant cookie values. Expected fields are email, password, and optionally a remember boolean. The remember value determines if the returned cookie is only valid for the session, or for a longer time.
Can be used when a user forgets their password. POST requests with an email field will send an email with a link to reset the password.
Used to handle reset password URLs generated when a user forgets their password. Expected input values for the POST request are recordId, which was generated when sending the reset mail, and password with the new password value.
These controls are related to completing OIDC interactions.
"},{"location":"usage/account/json-api/#controlsoidccancel","title":"controls.oidc.cancel","text":"Sending a POST request to this API will cancel the OIDC interaction and return the user to the client that started the interaction.
"},{"location":"usage/account/json-api/#controlsoidcprompt","title":"controls.oidc.prompt","text":"This API is used to determine what the next necessary step is in the OIDC interaction. The response will contain a location field, containing the URL to the next page the user should go to, and a prompt field, indicating the next step that is necessary to progress the OIDC interaction. The three possible prompts are the following:
Relevant for solving the login prompt. A GET request will return a list of WebIDs the user can choose from. This is the same result as requesting the account information and looking at the linked WebIDs. The POST request expects a webId value and optionally a remember boolean. The latter determines if the server should remember the picked WebID for later interactions.
POST requests to this API will cause the OIDC interaction to forget the picked WebID so a new one can be picked by the user.
"},{"location":"usage/account/json-api/#controlsoidcconsent","title":"controls.oidc.consent","text":"A GET request to this API will return all the relevant information about the client doing the request. A POST request causes the OIDC interaction to finish. It can have an optional remember value, which allows for refresh tokens if it is set to true.
All these controls link to HTML pages and are thus mostly relevant to provide links to let the user navigate around. The most important one is probably controls.html.account.account which links to an overview page for the account.
Below is an example of a controls object in a response.
{\n \"main\": {\n \"index\": \"http://localhost:3000/.account/\",\n \"logins\": \"http://localhost:3000/.account/login/\"\n },\n \"account\": {\n \"create\": \"http://localhost:3000/.account/account/\",\n \"logout\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/logout/\",\n \"webId\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/webid/\",\n \"pod\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/pod/\",\n \"clientCredentials\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/client-credentials/\"\n },\n \"password\": {\n \"create\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/login/password/\",\n \"login\": \"http://localhost:3000/.account/login/password/\",\n \"forgot\": \"http://localhost:3000/.account/login/password/forgot/\",\n \"reset\": \"http://localhost:3000/.account/login/password/reset/\"\n },\n \"oidc\": {\n \"cancel\": \"http://localhost:3000/.account/oidc/cancel/\",\n \"prompt\": \"http://localhost:3000/.account/oidc/prompt/\",\n \"webId\": \"http://localhost:3000/.account/oidc/pick-webid/\",\n \"forgetWebId\": \"http://localhost:3000/.account/oidc/forget-webid/\",\n \"consent\": \"http://localhost:3000/.account/oidc/consent/\"\n },\n \"html\": {\n \"main\": {\n \"login\": \"http://localhost:3000/.account/login/\"\n },\n \"account\": {\n \"createClientCredentials\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/client-credentials/\",\n \"createPod\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/pod/\",\n \"linkWebId\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/webid/\",\n \"account\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/\"\n },\n \"password\": {\n \"register\": \"http://localhost:3000/.account/login/password/register/\",\n \"login\": 
\"http://localhost:3000/.account/login/password/\",\n \"create\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/login/password/\",\n \"forgot\": \"http://localhost:3000/.account/login/password/forgot/\"\n }\n }\n}\n"},{"location":"usage/account/login-method/","title":"Adding a new login method","text":"By default, the server allows users to use email/password combinations to identify as the owner of their account. But, just like with many other parts of the server, this can be extended so other login methods can be used. Here we'll cover everything that is necessary.
"},{"location":"usage/account/login-method/#components","title":"Components","text":"These are the components that are needed for adding a new login method. Not all of these are mandatory, but they can make the life of the user easier when trying to find and use the new method. Also have a look at the general structure of new API components to see what is expected of such a component.
"},{"location":"usage/account/login-method/#create-component","title":"Create component","text":"There needs to be one or more components that allow a user to create an instance of the new login method and assign it to their account. The CreatePasswordHandler can be used as an example. This does not necessarily have to happen in a single request, potentially multiple requests can be used if the user has to perform actions on an external site for example. The only thing that matters is that at the end there is a new entry in the account's logins object.
When adding logins for your method, a new key will need to be chosen to group these logins together. The email/password method uses password, for example.
A new storage will probably need to be created to store relevant metadata about this login method's entries. Below is an example of how the PasswordStore is created:
{\n \"@id\": \"urn:solid-server:default:PasswordStore\",\n \"@type\": \"BasePasswordStore\",\n \"storage\": {\n \"@id\": \"urn:solid-server:default:PasswordStorage\",\n \"@type\": \"EncodingPathStorage\",\n \"relativePath\": \"/accounts/logins/password/\",\n \"source\": {\n \"@id\": \"urn:solid-server:default:KeyValueStorage\"\n }\n }\n}\n"},{"location":"usage/account/login-method/#login-component","title":"Login component","text":"After creating a login instance, a user needs to be able to log in using the new method. This can again be done with multiple API calls if necessary, but the final one needs to be one that handles the necessary actions such as creating a cookie and finishing the OIDC interaction if necessary. The ResolveLoginHandler can be extended to take care of most of this, the PasswordLoginHandler provides an example of this.
Besides creating a login instance and logging in, it is always possible to offer additional functionality specific to this login method. The email/password method, for example, also has components for password recovery and updating a password.
"},{"location":"usage/account/login-method/#html-pages","title":"HTML pages","text":"To make life easier for users, at the very least you probably want to make an HTML page that people can use to create an instance of your login method. Besides that, you could also make a page where people can combine creating an account with creating a login instance. The templates/identity folder contains all the pages the server has by default, which can be used as inspiration.
These pages need to be linked to the urn:solid-server:default:HtmlViewHandler. Below is an example of this:
{\n \"@id\": \"urn:solid-server:default:HtmlViewHandler\",\n \"@type\": \"HtmlViewHandler\",\n \"templates\": [{\n \"@id\": \"urn:solid-server:default:CreatePasswordHtml\",\n \"@type\": \"HtmlViewEntry\",\n \"filePath\": \"@css:templates/identity/password/create.html.ejs\",\n \"route\": {\n \"@id\": \"urn:solid-server:default:AccountPasswordRoute\"\n }\n }]\n}\n"},{"location":"usage/account/login-method/#updating-the-login-handler","title":"Updating the login handler","text":"The urn:solid-server:default:LoginHandler returns a list of available login methods, which are used to offer users a choice of which login method they want to use on the default login page. If you want the new method to also be offered you will have to add similar Components.js configuration:
{\n \"@id\": \"urn:solid-server:default:LoginHandler\",\n \"@type\": \"ControlHandler\",\n \"controls\": [\n {\n \"ControlHandler:_controls_key\": \"Email/password combination\",\n \"ControlHandler:_controls_value\": {\n \"@id\": \"urn:solid-server:default:LoginPasswordRoute\"\n }\n }\n ]\n}\n"},{"location":"usage/account/login-method/#controls","title":"Controls","text":"All new relevant API endpoints should be added to the controls object, otherwise there is no way for users to find out where to send their requests. Similarly, links to the HTML pages should also be in the controls, so they can be navigated to. Examples of how to do this can be found here.
The default account overview page makes some assumptions about the controls when building the page. Specifically, it checks whether controls.html.<LOGIN_METHOD>.create exists; if it does, it automatically creates a link on the page so users can create new login instances for their account.
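That check can be sketched as follows (a simplified stand-in for what the account page template does, operating on a controls object as returned by the server):

```javascript
// Return the create URLs of all login methods that the account page
// would automatically link to, based on the controls object.
function findCreateLinks(controls) {
  const links = {};
  for (const [method, entry] of Object.entries(controls.html ?? {})) {
    if (entry && typeof entry.create === 'string') {
      links[method] = entry.create;
    }
  }
  return links;
}
```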
Below is a description of the changes that are necessary to migrate data from v6 to v7 of the server.
"},{"location":"usage/account/migration/#account-data","title":"Account data","text":"The format of the forgot-password records was changed, but seeing as those are not important and new ones can be created if necessary, they can simply be removed when migrating. By default, these were located in the .internal/forgot-password/ folder, so this entire folder can be removed.
For existing accounts, the data was stored in the following format and location. In addition to the details below, the tails of all resource identifiers were base64 encoded.
.internal/accounts/, with key \"account/\" + encodeURIComponent(email) and contents { webId, email, password, verified }.
.internal/accounts/ (so the same location as the account data), with key webId and contents { useIdp, podBaseUrl?, clientCredentials? }. useIdp indicates if the WebID is linked to the account for identification. podBaseUrl is defined if the account was created with a pod. clientCredentials is an array containing the labels of all client credentials tokens created by the account.
.internal/accounts/credentials/, with contents { webId, secret }.
The V6MigrationInitializer class is responsible for migrating from this format to the new one and does so by reading in the old data and creating new instances in the IndexedStorage. In case you have an instance that made impactful changes to how storage is handled, that would be the class to investigate and replace. Password data can be reused, as the algorithm there was not changed. Email addresses are now stored in lowercase, so these need to be converted during migration.
The format of all other internal data was changed in the same way:
All internal storage that is not account data as described in the previous section will be removed to prevent issues with outdated formats. This applies to the following stored data:
.internal/idp/keys/
.internal/idp/adapter/
.internal/notifications/
The rootInitialized key, which prevents initialized roots from being overwritten.
.internal/setup/
Welcome to the Community Solid Server! Here we will cover many aspects of the server, such as how to propose changes, what the architecture looks like, and how to use many of the features the server provides.
The documentation here is still incomplete both in content and structure, so feel free to open a discussion about things you want to see added. While we try to update this documentation together with updates in the code, it is always possible we miss something, so please report it if you find incorrect information or links that no longer work.
An introductory tutorial that gives a quick overview of the Solid and CSS basics can be found here. This is a good way to get started with the server and its setup.
If you want to know what is new in the latest version, you can check out the release notes for a high-level overview and information on how to migrate your configuration to the next version. A list that includes all minor changes can be found in the changelog.
"},{"location":"#using-the-server","title":"Using the server","text":"For core developers with push access only:
Below is a non-exhaustive listing of the features available to a server instance, depending on the chosen configuration. The core feature of the CSS is that it uses dependency injection to configure its components, so any of the features below can always be adapted or replaced with custom components if required. It can also be used to configure dummy components for debugging, development, or experimentation purposes. See this tutorial and/or this example repository for more information on that.
To generate configurations with some of these features enabled/disabled, you can use the configuration generator.
"},{"location":"features/#authentication","title":"Authentication","text":"Clients are identified based on the contents of DPoP tokens, as described in the Solid-OIDC specification.
The server also provides several dummy components that can be used here, to either always identify the client as a fixed WebID, or to allow the WebID to be set directly in the Authorization header. These can be configured by changing the ldp/authentication import in your configuration.
Two authorization mechanisms are implemented for determining who has access to resources:
Alternatively, the server can be configured to not have any kind of authorization and allow full access to all resources.
"},{"location":"features/#solid-protocol","title":"Solid Protocol","text":"The Solid Protocol is supported.
Requests to the server support content negotiation for common RDF formats.
Binary range headers are supported.
ETag and Last-Modified headers are supported. These can be used for conditional requests.
PATCH requests targeting RDF resources can be made with N3 Patch or SPARQL Update bodies.
The server can be configured to store data in memory, on the file system, or through a SPARQL endpoint. Similarly, the locking system that is used to prevent data conflicts can be configured to store locks in memory, on the file system, or in a Redis store, or it can be disabled.
Multiple worker threads can be used when starting the server.
"},{"location":"features/#account-management","title":"Account management","text":"Accounts can be created on the server with which users can perform the following actions, through either a JSON or an HTML API:
Using these accounts, the server can generate tokens to support Solid-OIDC authentication.
"},{"location":"features/#pods","title":"Pods","text":"The server keeps track of the pod owners, which is a list of WebIDs which have full control access over all resources contained within. Owners can be added to and removed from a pod.
Pod URLs can be minted as either subdomain, http://pod.example.com/, or suffix, http://example.com/pod/.
When starting the server, a configuration file can be provided to immediately create one or more accounts on the server with their own pods. See the documentation for more information.
"},{"location":"features/#notifications","title":"Notifications","text":"CSS supports v0.2.0 of the Solid Notifications Protocol. Specifically it supports the Notification Types WebSocketChannel2023 and WebhookChannel2023.
More documentation on notifications can be found here.
"},{"location":"architecture/core/","title":"Core building blocks","text":"There are several core building blocks used in many places of the server. These are described here.
"},{"location":"architecture/core/#handlers","title":"Handlers","text":"A very important building block that gets reused in many places is the AsyncHandler. The idea is that a handler has two important functions. canHandle determines if this class is capable of correctly handling the request, and throws an error if it cannot. For example, a class that converts JSON-LD to turtle can handle all requests containing JSON-LD data, but does not know what to do with a request that contains a JPEG. The second function is handle, where the class executes on the input data and returns the result. If an error gets thrown here, it means there is an issue with the input. For example, if the input data claims to be JSON-LD but is actually not.
The power of using this interface really shines when using certain utility classes. The one we use the most is the WaterfallHandler, which takes as input a list of handlers of the same type. The input and output of a WaterfallHandler are the same as those of its inputs, meaning it can be used in the same places. When doing a canHandle call, it will iterate over all its input handlers to find the first one where the canHandle call succeeds, and when calling handle it will return the result of that specific handler. This allows us to chain together many handlers that each have their specific niche, such as handlers that each support a specific HTTP method (GET/PUT/POST/etc.), or handlers that only take requests targeting a specific subset of URLs. To the parent class it will look like it has a handler that supports all methods, while in practice it will be a WaterfallHandler containing all these separate handlers.
Some other utility classes are the ParallelHandler, which runs all handlers simultaneously, and the SequenceHandler, which runs all of them one after the other. Since multiple handlers are executed here, these only work for handlers that have no output.
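The waterfall behaviour described above can be sketched in a few lines. This is a simplified stand-in for the real WaterfallHandler, with plain object handlers instead of AsyncHandler instances; the HTTP-method example mirrors the use case mentioned in the text.

```typescript
interface Handler<TIn, TOut> {
  canHandle: (input: TIn) => Promise<void>;
  handle: (input: TIn) => Promise<TOut>;
}

// Simplified waterfall: returns the result of the first handler
// whose canHandle call succeeds.
class Waterfall<TIn, TOut> {
  public constructor(private readonly handlers: Handler<TIn, TOut>[]) {}

  public async handleSafe(input: TIn): Promise<TOut> {
    for (const handler of this.handlers) {
      try {
        await handler.canHandle(input);
      } catch {
        continue; // This handler does not support the input; try the next one.
      }
      return handler.handle(input);
    }
    throw new Error('No handler supports the given input');
  }
}

// Hypothetical example: one handler per HTTP method.
const methodHandler = new Waterfall<{ method: string }, string>([
  {
    canHandle: async ({ method }) => { if (method !== 'GET') throw new Error('Only GET'); },
    handle: async () => 'handled GET',
  },
  {
    canHandle: async ({ method }) => { if (method !== 'PUT') throw new Error('Only PUT'); },
    handle: async () => 'handled PUT',
  },
]);

const result = await methodHandler.handleSafe({ method: 'PUT' });
```

To any caller, methodHandler looks like a single handler that supports both methods.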
Almost all data is handled in a streaming fashion. This allows us to work with very large resources without having to fully load them in memory: a client could be reading data returned by the server while the server is still reading the file. Internally this means we are mostly handling data as Readable objects. We actually use Guarded<Readable>, an internal format we created to help with error handling. Such streams can be created using utility functions such as guardStream and guardedStreamFrom. Similarly, we have pipeSafely to pipe streams in a way that also helps with error handling.
The community server uses the dependency injection framework Components.js to link all class instances together, and uses Components-Generator.js to automatically generate the necessary description configurations of all classes. This framework allows us to configure our components in a JSON file. The advantage of this is that changing the configuration of components does not require any changes to the code, as one can just change the default configuration file, or provide a custom configuration file.
More information can be found in the Components.js documentation, but a summarized overview can be found below.
"},{"location":"architecture/dependency-injection/#component-files","title":"Component files","text":"Components.js requires a component file for every class you might want to instantiate. Fortunately those get generated automatically by Components-Generator.js. Calling npm run build will call the generator and generate those JSON-LD files in the dist folder. The generator uses the index.ts, so new classes always have to be added there or they will not get a component file.
Configuration files are how we tell Components.js which classes to instantiate and link together. All the community server configurations can be found in the config folder. That folder also contains information about how different pre-defined configurations can be used.
A single component in such a configuration file might look as follows:
{\n \"comment\": \"Storage used for account management.\",\n \"@id\": \"urn:solid-server:default:AccountStorage\",\n \"@type\": \"JsonResourceStorage\",\n \"source\": { \"@id\": \"urn:solid-server:default:ResourceStore\" },\n \"baseUrl\": { \"@id\": \"urn:solid-server:default:variable:baseUrl\" },\n \"container\": \"/.internal/accounts/\"\n}\n With the corresponding constructor of the JsonResourceStorage class:
public constructor(source: ResourceStore, baseUrl: string, container: string)\n The important elements here are the following:
\"comment\": (optional) A description of this component.\"@id\": (optional) A unique identifier of this component, which allows it to be used as parameter values in different places.\"@type\": The class name of the component. This must be a TypeScript class name that is exported via index.ts.As you can see from the constructor, the other fields are direct mappings from the constructor parameters. source references another object, which we refer to using its identifier urn:solid-server:default:ResourceStore. baseUrl is just a string, but here we use a variable that was set before calling Components.js which is why it also references an @id. These variables are set when starting up the server, based on the command line parameters.
The initial architecture document the project was started from can be found here. Many things have been added since the original inception of the project, but the core ideas within that document are still valid.
As can be seen from the architecture, an important idea is the modularity of all components. No actual implementations are defined there, only their interfaces. Making all the components independent of each other in such a way provides us with enormous flexibility: they can all be replaced by a different implementation without impacting anything else. This is how we can provide many different configurations for the server, and why it is impossible to provide ready-made solutions for all possible combinations.
"},{"location":"architecture/overview/#architecture-diagrams","title":"Architecture diagrams","text":"Having a modular architecture makes it more difficult to give a complete architecture overview. We will limit ourselves to the more commonly used default configurations we provide, and in certain cases we might give examples of what differences there are based on what configurations are being imported.
To do this we will make use of architecture diagrams. We will use an example below to explain the formatting used throughout the architecture documentation:
flowchart TD\n LdpHandler("<strong>LdpHandler</strong><br>ParsingHttpHandler")\n LdpHandler --> LdpHandlerArgs\n\n subgraph LdpHandlerArgs[" "]\n RequestParser("<strong>RequestParser</strong><br>BasicRequestParser")\n Auth("<br>AuthorizingHttpHandler")\n ErrorHandler("<strong>ErrorHandler</strong><br><i>ErrorHandler</i>")\n ResponseWriter("<strong>ResponseWriter</strong><br>BasicResponseWriter")\n end Below is a summary of how to interpret such diagrams:
- Identifiers are usually shortened for readability; the full identifier is created by prepending urn:solid-server:default: to the shorthand identifier.

For example, in the above, LdpHandler is a shorthand for the actual identifier urn:solid-server:default:LdpHandler and is an instance of ParsingHttpHandler. It has four parameters, one of which has no identifier but is an instance of AuthorizingHttpHandler.
Below are the sections that go deeper into the features of the server and how those work.
When starting the server, the application actually uses Components.js twice to instantiate components. The first instantiation is used to parse the command line arguments. These then get converted into Components.js variables and are used to instantiate the actual server.
"},{"location":"architecture/features/cli/#architecture","title":"Architecture","text":"flowchart TD\n CliResolver(\"<strong>CliResolver</strong><br>CliResolver\")\n CliResolver --> CliResolverArgs\n\n subgraph CliResolverArgs[\" \"]\n CliExtractor(\"<strong>CliExtractor</strong><br>YargsCliExtractor\")\n ShorthandResolver(\"<strong>ShorthandResolver</strong><br>CombinedShorthandResolver\")\n end\n\n ShorthandResolver --> ShorthandResolverArgs\n subgraph ShorthandResolverArgs[\" \"]\n BaseUrlExtractor(\"<br>BaseUrlExtractor\")\n KeyExtractor(\"<br>KeyExtractor\")\n AssetPathExtractor(\"<br>AssetPathExtractor\")\n end The CliResolver (urn:solid-server-app-setup:default:CliResolver) is simply a way to combine both the CliExtractor (urn:solid-server-app-setup:default:CliExtractor) and ShorthandResolver (urn:solid-server-app-setup:default:ShorthandResolver) into a single object and has no other function.
Which arguments are supported and which Components.js variables are generated can depend on the configuration that is being used. For example, for an HTTPS server additional arguments will be needed to specify the necessary key/cert files.
"},{"location":"architecture/features/cli/#cliresolver","title":"CliResolver","text":"The CliResolver converts the incoming string of arguments into a key/value object. By default, a YargsCliExtractor is used, which makes use of the yargs library and is configured similarly.
The ShorthandResolver uses the key/value object that was generated above to generate Components.js variable bindings. A CombinedShorthandResolver combines the results of multiple ShorthandExtractors by mapping their values to specific variables. For example, a BaseUrlExtractor will be used to extract the value for baseUrl, or port if no baseUrl value is provided, and use it to generate the value for the variable urn:solid-server:default:variable:baseUrl.
These extractors are also where the default values for the server are defined. For example, BaseUrlExtractor will be instantiated with a default port of 3000 which will be used if no port is provided.
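The resolution logic described above can be sketched as follows. This is a hypothetical simplification, not the real BaseUrlExtractor; only the default port of 3000 and the baseUrl-from-port fallback come from the text.

```typescript
// Hypothetical shorthand arguments parsed from the command line.
interface Shorthand {
  baseUrl?: string;
  port?: number;
}

// Sketch: derive the baseUrl variable value from the shorthand arguments.
function resolveBaseUrl(args: Shorthand): string {
  if (args.baseUrl) {
    return args.baseUrl; // An explicit baseUrl always wins.
  }
  const port = args.port ?? 3000; // Default port used if none is provided.
  return `http://localhost:${port}/`;
}

// The resulting value would then be bound to the Components.js variable
// urn:solid-server:default:variable:baseUrl.
const fromPort = resolveBaseUrl({ port: 8080 });
const fromDefault = resolveBaseUrl({});
```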
The variables generated here will be used to initialize the server.
"},{"location":"architecture/features/http-handler/","title":"Handling HTTP requests","text":"The direction of the arrows was changed slightly here to make the graph readable.
flowchart LR\n HttpHandler(\"<strong>HttpHandler</strong><br>SequenceHandler\")\n HttpHandler --> HttpHandlerArgs\n\n subgraph HttpHandlerArgs[\" \"]\n direction LR\n Middleware(\"<strong>Middleware</strong><br><i>HttpHandler</i>\")\n WaterfallHandler(\"<br>WaterfallHandler\")\n end\n\n Middleware --> WaterfallHandler\n WaterfallHandler --> WaterfallHandlerArgs\n\n subgraph WaterfallHandlerArgs[\" \"]\n direction TB\n StaticAssetHandler(\"<strong>StaticAssetHandler</strong><br>StaticAssetHandler\")\n OidcHandler(\"<strong>OidcHandler</strong><br><i>HttpHandler</i>\")\n NotificationHttpHandler(\"<strong>NotificationHttpHandler</strong><br><i>HttpHandler</i>\")\n StorageDescriptionHandler(\"<strong>StorageDescriptionHandler</strong><br><i>HttpHandler</i>\")\n AuthResourceHttpHandler(\"<strong>AuthResourceHttpHandler</strong><br><i>HttpHandler</i>\")\n IdentityProviderHttpHandler(\"<strong>IdentityProviderHttpHandler</strong><br><i>HttpHandler</i>\")\n LdpHandler(\"<strong>LdpHandler</strong><br><i>HttpHandler</i>\")\n end\n\n StaticAssetHandler --> OidcHandler\n OidcHandler --> NotificationHttpHandler\n NotificationHttpHandler --> StorageDescriptionHandler\n StorageDescriptionHandler --> AuthResourceHttpHandler\n AuthResourceHttpHandler --> IdentityProviderHttpHandler\n IdentityProviderHttpHandler --> LdpHandler The HttpHandler is responsible for handling an incoming HTTP request. The request will always first go through the Middleware, where certain required headers will be added such as CORS headers.
After that it will go through the list in the WaterfallHandler to find the first handler that understands the request, with the LdpHandler at the bottom being the catch-all default.
The urn:solid-server:default:StaticAssetHandler matches exact URLs to static assets which require no further logic. An example of this is the favicon, where the /favicon.ico URL is directed to the favicon file at /templates/images/favicon.ico. It can also map entire folders to a specific path, such as /.well-known/css/styles/ which contains all stylesheets.
The urn:solid-server:default:OidcHandler handles all requests related to the Solid-OIDC specification. The OIDC component is configured to work on the /.oidc/ subpath, so this handler catches all those requests and sends them to the internal OIDC library that is used.
The urn:solid-server:default:NotificationHttpHandler catches all notification subscription requests. By default these are requests targeting /.notifications/. Which specific subscription type is targeted is then based on the next part of the URL.
The urn:solid-server:default:StorageDescriptionHandler returns the relevant RDF data for requests targeting a storage description resource. It does this by knowing which URL suffix is used for such resources, and verifying that the associated container is an actual storage container.
The urn:solid-server:default:AuthResourceHttpHandler is identical to the urn:solid-server:default:LdpHandler which will be discussed below, but only handles resources relevant for authorization.
In practice this means that if your server is configured to use Web Access Control for authorization, this handler will catch all requests targeting .acl resources.
These resources already need to be handled at this point so that they can also be used for authorization checks on the subsequent handler(s). More on this can be found in the identity provider documentation.
"},{"location":"architecture/features/http-handler/#identityproviderhttphandler","title":"IdentityProviderHttpHandler","text":"The urn:solid-server:default:IdentityProviderHttpHandler handles everything related to our custom identity provider API, such as registering, logging in, returning the relevant HTML pages, etc. All these requests are identified by being on the /.account/ subpath. More information on the API can be found in the identity provider documentation The architectural overview can be found here.
Once a request reaches the urn:solid-server:default:LdpHandler, the server assumes this is a standard Solid request according to the Solid protocol. A detailed description of what happens then can be found here.
When starting the server, multiple Initializers trigger to set up everything correctly, the last one of which starts listening to the specified port. Similarly, when stopping the server several Finalizers trigger to clean up where necessary, although the latter only happens when starting the server through code.
"},{"location":"architecture/features/initialization/#app","title":"App","text":"flowchart TD\n App(\"<strong>App</strong><br>App\")\n App --> AppArgs\n\n subgraph AppArgs[\" \"]\n Initializer(\"<strong>Initializer</strong><br><i>Initializer</i>\")\n AppFinalizer(\"<strong>Finalizer</strong><br><i>Finalizer</i>\")\n end App (urn:solid-server:default:App) is the main component that gets instantiated by Components.js. Every other component should be able to trace an instantiation path back to it if it also wants to be instantiated.
Its only function is to contain an Initializer and a Finalizer, which get called by start and stop respectively.
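The App component's role can be sketched like this. It is a hypothetical simplification (synchronous callbacks instead of the real asynchronous Initializer/Finalizer handlers) meant only to show that App merely wires the two together.

```typescript
// Hypothetical stand-ins for the Initializer and Finalizer handlers.
type Step = () => void;

// Sketch of the App component: start/stop simply delegate.
class App {
  public constructor(
    private readonly initializer: Step,
    private readonly finalizer: Step,
  ) {}

  public start(): void {
    this.initializer(); // Runs all server initialization.
  }

  public stop(): void {
    this.finalizer(); // Runs all cleanup when stopping through code.
  }
}

const events: string[] = [];
const app = new App(
  () => { events.push('initialized'); },
  () => { events.push('finalized'); },
);
app.start();
app.stop();
```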
flowchart TD\n Initializer("<strong>Initializer</strong><br>SequenceHandler")\n Initializer --> InitializerArgs\n\n subgraph InitializerArgs[" "]\n direction LR\n LoggerInitializer("<strong>LoggerInitializer</strong><br>LoggerInitializer")\n PrimaryInitializer("<strong>PrimaryInitializer</strong><br>ProcessHandler")\n WorkerInitializer("<strong>WorkerInitializer</strong><br>ProcessHandler")\n end\n\n LoggerInitializer --> PrimaryInitializer\n PrimaryInitializer --> WorkerInitializer The very first thing that needs to happen is initializing the logger. Before this happens, other classes are unable to use logging.
The PrimaryInitializer will only trigger once, in the primary worker thread, while the WorkerInitializer will trigger for every worker thread. If your server setup is single-threaded, which is the default, there is no relevant difference between the two.
flowchart TD\n PrimaryInitializer(\"<strong>PrimaryInitializer</strong><br>ProcessHandler\")\n PrimaryInitializer --> PrimarySequenceInitializer(\"<strong>PrimarySequenceInitializer</strong><br>SequenceHandler\")\n PrimarySequenceInitializer --> PrimarySequenceInitializerArgs\n\n subgraph PrimarySequenceInitializerArgs[\" \"]\n direction LR\n CleanupInitializer(\"<strong>CleanupInitializer</strong><br>SequenceHandler\")\n PrimaryParallelInitializer(\"<strong>PrimaryParallelInitializer</strong><br>ParallelHandler\")\n WorkerManager(\"<strong>WorkerManager</strong><br>WorkerManager\")\n end\n\n CleanupInitializer --> PrimaryParallelInitializer\n PrimaryParallelInitializer --> WorkerManager The above is a simplification of all the initializers that are present in the PrimaryInitializer as there are several smaller initializers that also trigger but are less relevant here.
The CleanupInitializer is an initializer that cleans up anything that might have remained from a previous server start and could impact behaviour. Relevant components in other parts of the configuration are responsible for adding themselves to this array if needed. An example of this is file-based locking components which might need to remove any dangling locking files.
The PrimaryParallelInitializer can be used to add any initializers that have to happen in the primary process. This makes it easier for users to add initializers, as they can simply be appended to its handlers.
The WorkerManager is responsible for setting up the worker threads, if any.
flowchart TD\n WorkerInitializer(\"<strong>WorkerInitializer</strong><br>ProcessHandler\")\n WorkerInitializer --> WorkerSequenceInitializer(\"<strong>WorkerSequenceInitializer</strong><br>SequenceHandler\")\n WorkerSequenceInitializer --> WorkerSequenceInitializerArgs\n\n subgraph WorkerSequenceInitializerArgs[\" \"]\n direction LR\n WorkerParallelInitializer(\"<strong>WorkerParallelInitializer</strong><br>ParallelHandler\")\n ServerInitializer(\"<strong>ServerInitializer</strong><br>ServerInitializer\")\n end\n\n WorkerParallelInitializer --> ServerInitializer The WorkerInitializer is quite similar to the PrimaryInitializer but triggers once per worker thread. Like the PrimaryParallelInitializer, the WorkerParallelInitializer can be used to add any custom initializers that need to run.
The ServerInitializer is the initializer that finally starts up the server by listening to the relevant port, once all the initialization described above is finished. It takes two components as input: an HttpServerFactory and a ServerListener.
flowchart TD\n ServerInitializer(\"<strong>ServerInitializer</strong><br>ServerInitializer\")\n ServerInitializer --> ServerInitializerArgs\n\n subgraph ServerInitializerArgs[\" \"]\n direction LR\n ServerFactory(\"<strong>ServerFactory</strong><br>BaseServerFactory\")\n ServerListener(\"<strong>ServerListener</strong><br>ParallelHandler\")\n end\n\n ServerListener --> HandlerServerListener(\"<strong>HandlerServerListener</strong><br>HandlerServerListener\")\n\n HandlerServerListener --> HttpHandler(\"<strong>HttpHandler</strong><br><i>HttpHandler</i>\") The HttpServerFactory is responsible for starting a server on a given port. Depending on the configuration this can be an HTTP or an HTTPS server. The created server emits events when it receives requests.
A ServerListener is a class that takes the created server as input and attaches a listener to interpret events. One listener that is always used is the urn:solid-server:default:HandlerServerListener, which calls an HttpHandler to resolve HTTP requests.
Sometimes it is necessary to add additional listeners; these can be added to the urn:solid-server:default:ServerListener as it is a ParallelHandler. An example of this is when WebSockets are used to handle notifications.
This section covers the architecture used to support the Notifications protocol as described in https://solidproject.org/TR/2022/notifications-protocol-20221231.
There are three core architectural components that have distinct entry points:
Discovery is done through the storage description resource(s). The server returns the same triples for every such resource as the notification subscription URL is always located in the root of the server.
flowchart LR\n StorageDescriptionHandler(\"<br>StorageDescriptionHandler\")\n StorageDescriptionHandler --> StorageDescriber(\"<strong>StorageDescriber</strong><br>ArrayUnionHandler\")\n StorageDescriber --> NotificationDescriber(\"NotificationDescriber<br>NotificationDescriber\")\n NotificationDescriber --> NotificationDescriberArgs\n\n subgraph NotificationDescriberArgs[\" \"]\n direction LR\n NotificationChannelType(\"<br>NotificationChannelType\")\n NotificationChannelType2(\"<br>NotificationChannelType\")\n end The server uses a StorageDescriptionHandler to generate the necessary RDF data and to handle content negotiation. To generate the data we have multiple StorageDescribers, whose results get merged together in an ArrayUnionHandler.
A NotificationChannelType contains the details of one specific notification channel type, including a JSON-LD representation of the corresponding subscription resource. One specific instance of a StorageDescriber is the NotificationDescriber, which merges those JSON-LD descriptions into a single set of RDF quads. When adding a new subscription type, a new instance of such a class should be added to the urn:solid-server:default:StorageDescriber.
To subscribe, a client has to send a specific JSON-LD request to the URL found during discovery.
flowchart LR\n NotificationTypeHandler("<strong>NotificationTypeHandler</strong><br>WaterfallHandler")\n NotificationTypeHandler --> NotificationTypeHandlerArgs\n\n subgraph NotificationTypeHandlerArgs[" "]\n direction LR\n OperationRouterHandler("<br>OperationRouterHandler") --> NotificationSubscriber("<br>NotificationSubscriber")\n NotificationSubscriber --> NotificationChannelType("<br><i>NotificationChannelType</i>")\n OperationRouterHandler2("<br>OperationRouterHandler") --> NotificationSubscriber2("<br>NotificationSubscriber")\n NotificationSubscriber2 --> NotificationChannelType2("<br><i>NotificationChannelType</i>")\n end Every subscription type should have a subscription URL relative to the root notification URL, which in our configs is set to /.notifications/. For every type there is then an OperationRouterHandler that accepts requests to that specific URL, after which a NotificationSubscriber handles all checks related to subscribing, for which it uses a NotificationChannelType. If the subscription is valid and has authorization, the results will be saved in a NotificationChannelStorage.
flowchart TB\n ListeningActivityHandler(\"<strong>ListeningActivityHandler</strong><br>ListeningActivityHandler\")\n ListeningActivityHandler --> ListeningActivityHandlerArgs\n\n subgraph ListeningActivityHandlerArgs[\" \"]\n NotificationChannelStorage(\"<strong>NotificationChannelStorage</strong><br><i>NotificationChannelStorage</i>\")\n ResourceStore(\"<strong>ResourceStore</strong><br><i>ActivityEmitter</i>\")\n NotificationHandler(\"<strong>NotificationHandler</strong><br>WaterfallHandler\")\n end\n\n NotificationHandler --> NotificationHandlerArgs\n subgraph NotificationHandlerArgs[\" \"]\n direction TB\n NotificationHandler1(\"<br><i>NotificationHandler</i>\")\n NotificationHandler2(\"<br><i>NotificationHandler</i>\")\n end An ActivityEmitter is a class that emits events every time data changes in the server. The MonitoringStore is an implementation of this in the server. The ListeningActivityHandler is the class that listens to these events and makes sure relevant notifications get sent out.
It will pull the relevant subscriptions from the storage and call the stored NotificationHandler for each of them. For every subscription type, a NotificationHandler should be added to the WaterfallHandler that handles notifications for that specific type.
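The dispatch loop described above can be sketched as follows. The data model here is assumed for illustration, not taken from the real NotificationChannelStorage or ListeningActivityHandler classes.

```typescript
// Hypothetical minimal model of a stored notification channel.
interface Channel {
  id: string;
  topic: string; // The resource URL this channel is subscribed to.
}

type Notify = (channel: Channel, activity: string) => void;

// Sketch: on a resource change, find the matching subscriptions
// and call the stored notification handler for each of them.
function onResourceChanged(
  topic: string,
  activity: string,
  channels: Channel[],
  notify: Notify,
): void {
  for (const channel of channels.filter((c) => c.topic === topic)) {
    notify(channel, activity);
  }
}

const sent: string[] = [];
onResourceChanged(
  'http://localhost:3000/foo',
  'Update',
  [
    { id: 'c1', topic: 'http://localhost:3000/foo' },
    { id: 'c2', topic: 'http://localhost:3000/bar' },
  ],
  (channel, activity) => { sent.push(`${channel.id}:${activity}`); },
);
```

Only the channel subscribed to the changed resource receives a notification; the other channel is skipped.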
To add support for WebSocketChannel2023 notifications, components were added as described in the documentation above.
For discovery, a NotificationDescriber was added with the corresponding settings.
As NotificationChannelType, there is a specific WebSocketChannel2023Type that contains all the necessary information.
As NotificationHandler, the following architecture is used:
flowchart TB\n TypedNotificationHandler(\"<br>TypedNotificationHandler\")\n TypedNotificationHandler --> ComposedNotificationHandler(\"<br>ComposedNotificationHandler\")\n ComposedNotificationHandler --> ComposedNotificationHandlerArgs\n\n subgraph ComposedNotificationHandlerArgs[\" \"]\n direction LR\n BaseNotificationGenerator(\"<strong>BaseNotificationGenerator</strong><br><i>NotificationGenerator</i>\")\n BaseNotificationSerializer(\"<strong>BaseNotificationSerializer</strong><br><i>NotificationSerializer</i>\")\n WebSocket2023Emitter(\"<strong>WebSocket2023Emitter</strong><br>WebSocket2023Emitter\")\n BaseNotificationGenerator --> BaseNotificationSerializer --> WebSocket2023Emitter\n end A TypedNotificationHandler is a handler that can be used to filter out subscriptions for a specific type, making sure only WebSocketChannel2023 subscriptions will be handled.
A ComposedNotificationHandler combines 3 interfaces to handle the notifications:
- NotificationGenerator converts the information into a Notification object.
- NotificationSerializer converts a Notification object into a serialized Representation.
- NotificationEmitter takes a Representation and sends it out in a way specific to that subscription type.

urn:solid-server:default:BaseNotificationGenerator is a generator that fills in the default Notification template, and also caches the result so it can be reused by multiple subscriptions.
urn:solid-server:default:BaseNotificationSerializer converts the Notification to a JSON-LD representation and handles any necessary content negotiation based on the accept notification feature.
A WebSocket2023Emitter is a specific emitter that checks whether the current open WebSockets correspond to the subscription.
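The generator → serializer → emitter composition can be sketched as a simple chain. This is a hypothetical simplification: the types and the JSON serialization stand in for the real Notification objects, JSON-LD Representations, and WebSocket emission.

```typescript
// Hypothetical minimal notification shape.
interface Notification {
  type: string;
  object: string;
}

// Generator: build the notification for a changed resource.
const generate = (object: string): Notification => ({ type: 'Update', object });

// Serializer: stand-in for the JSON-LD conversion and content negotiation.
const serialize = (n: Notification): string => JSON.stringify(n);

// Emitter: stand-in for sending over an open WebSocket.
const emitted: string[] = [];
const emit = (data: string): void => { emitted.push(data); };

// A composed handler simply chains the three steps.
const handleNotification = (object: string): void => emit(serialize(generate(object)));

handleNotification('http://localhost:3000/foo');
```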
flowchart TB\n WebSocket2023Listener(\"<strong>WebSocket2023Listener</strong><br>WebSocket2023Listener\")\n WebSocket2023Listener --> WebSocket2023ListenerArgs\n\n subgraph WebSocket2023ListenerArgs[\" \"]\n direction LR\n NotificationChannelStorage(\"<strong>NotificationChannelStorage</strong><br>NotificationChannelStorage\")\n SequenceHandler(\"<br>SequenceHandler\")\n end\n\n SequenceHandler --> SequenceHandlerArgs\n\n subgraph SequenceHandlerArgs[\" \"]\n direction TB\n WebSocket2023Storer(\"<strong>WebSocket2023Storer</strong><br>WebSocket2023Storer\")\n WebSocket2023StateHandler(\"<strong>WebSocket2023StateHandler</strong><br>BaseStateHandler\")\n end To detect and store WebSocket connections, the WebSocket2023Listener is added as a listener to the HTTP server. For all WebSocket connections that get opened, it verifies whether they correspond to an existing subscription. If yes, the information gets sent out to its stored WebSocket2023Handler.
In this case, this is a SequenceHandler, which contains a WebSocket2023Storer and a BaseStateHandler. The WebSocket2023Storer will store the WebSocket in the same map used by the WebSocket2023Emitter, so that class can emit events later on, as mentioned above. The state handler will make sure that a notification gets sent out if the subscription has a state feature request, as defined in the notification specification.
The additions required to support WebhookChannel2023 are quite similar to those needed for WebSocketChannel2023:
- WebhookDescriber, which is an extension of a NotificationDescriber.
- The WebhookChannel2023Type class contains all the necessary typing information.
- WebhookEmitter is the NotificationEmitter that sends the request.
- WebhookUnsubscriber and WebhookWebId are additional utility classes to support the spec requirements.

A large part of every response of the JSON API is the controls block. These are generated using nested ControlHandler objects. These take as input a key/value map, with the values being either routes or other interaction handlers. These will then be executed to determine the values of the output JSON object, using the same keys. By using other ControlHandlers in the input map, we can create nested objects.
The default structure of these handlers is as follows:
flowchart LR\n RootControlHandler(\"<strong>RootControlHandler</strong><br>ControlHandler\")\n RootControlHandler --controls--> ControlHandler(\"<strong>ControlHandler</strong><br>ControlHandler\")\n ControlHandler --main--> MainControlHandler(\"<strong>MainControlHandler</strong><br>ControlHandler\")\n ControlHandler --account--> AccountControlHandler(\"<strong>AccountControlHandler</strong><br>ControlHandler\")\n ControlHandler --password--> PasswordControlHandler(\"<strong>PasswordControlHandler</strong><br>ControlHandler\")\n ControlHandler --\"oidc\"--> OidcControlHandler(\"<strong>OidcControlHandler</strong><br>OidcControlHandler\")\n ControlHandler --html--> HtmlControlHandler(\"<strong>HtmlControlHandler</strong><br>ControlHandler\")\n\n HtmlControlHandler --main--> MainHtmlControlHandler(\"<strong>MainHtmlControlHandler</strong><br>ControlHandler\")\n HtmlControlHandler --account--> AccountHtmlControlHandler(\"<strong>AccountHtmlControlHandler</strong><br>ControlHandler\")\n HtmlControlHandler --password--> PasswordHtmlControlHandler(\"<strong>PasswordHtmlControlHandler</strong><br>ControlHandler\") Each of these control handlers then has a map of routes which link to the actual API endpoints. How to add these can be seen here.
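The nesting mechanism can be sketched as follows. This is a hypothetical simplification of the real ControlHandler: string values stand in for resolved routes, and nested builders produce the nested JSON objects.

```typescript
// The controls block is a JSON object whose values are URLs or nested objects.
type Controls = { [key: string]: string | Controls };

// An entry is either a route URL or another (nested) builder.
type Entry = string | ControlBuilder;

// Sketch of a control handler: resolve every entry into the output JSON,
// recursing into nested builders to create nested objects.
class ControlBuilder {
  public constructor(private readonly entries: Record<string, Entry>) {}

  public build(): Controls {
    const result: Controls = {};
    for (const [key, value] of Object.entries(this.entries)) {
      result[key] = typeof value === 'string' ? value : value.build();
    }
    return result;
  }
}

// Hypothetical example mirroring the controls.account.webId structure.
const controls = new ControlBuilder({
  account: new ControlBuilder({
    webId: 'http://localhost:3000/.account/account/123/webid/',
  }),
}).build();
```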
"},{"location":"architecture/features/accounts/overview/","title":"Account management","text":"The main entry point is the IdentityProviderHandler, which routes all requests targeting a resource starting with /.account/ into this handler, after which it goes through similar parsing handlers as described here, the flow of which is shown below:
flowchart LR\n Handler(\"<strong>IdentityProviderHandler</strong><br>RouterHandler\")\n ParsingHandler(\"<strong>IdentityProviderParsingHandler</strong><br>AuthorizingHttpHandler\")\n AuthorizingHandler(\"<strong>IdentityProviderAuthorizingHandler</strong><br>AuthorizingHttpHandler\")\n\n Handler --> ParsingHandler\n ParsingHandler --> AuthorizingHandler\n AuthorizingHandler --> HttpHandler(\"<strong>IdentityProviderHttpHandler</strong><br>IdentityProviderHttpHandler\") The IdentityProviderHttpHandler is where the actual differentiation of this component starts. It handles identifying the account based on the supplied cookie and determining the active OIDC interaction, after which it calls an InteractionHandler with this additional input. The InteractionHandler is many handlers chained together as follows:
flowchart TD\n HttpHandler("<strong>IdentityProviderHttpHandler</strong><br>IdentityProviderHttpHandler")\n HttpHandler --> InteractionHandler("<strong>InteractionHandler</strong><br>WaterfallHandler")\n InteractionHandler --> InteractionHandlerArgs\n\n subgraph InteractionHandlerArgs[" "]\n HtmlViewHandler("<strong>HtmlViewHandler</strong><br>HtmlViewHandler")\n LockingInteractionHandler("<strong>LockingInteractionHandler</strong><br>LockingInteractionHandler")\n end\n\n LockingInteractionHandler --> JsonConversionHandler("<strong>JsonConversionHandler</strong><br>JsonConversionHandler")\n JsonConversionHandler --> VersionHandler("<strong>VersionHandler</strong><br>VersionHandler")\n VersionHandler --> CookieInteractionHandler("<strong>CookieInteractionHandler</strong><br>CookieInteractionHandler")\n CookieInteractionHandler --> RootControlHandler("<strong>RootControlHandler</strong><br>ControlHandler")\n RootControlHandler --> LocationInteractionHandler("<strong>LocationInteractionHandler</strong><br>LocationInteractionHandler")\n LocationInteractionHandler --> InteractionRouteHandler("<strong>InteractionRouteHandler</strong><br>WaterfallHandler") The HtmlViewHandler catches all requests that expect HTML output. This class keeps a list of HTML pages and their corresponding URLs and returns them when needed.
If the request is for the JSON API, the request goes through a chain of handlers, each responsible for a specific step in the API process. We'll list and summarize these here:
- LockingInteractionHandler: In case the request is authenticated, this requests a lock on that account to prevent simultaneous operations on the same account.
- JsonConversionHandler: Converts the streaming input into a JSON object.
- VersionHandler: Adds a version number to all output.
- CookieInteractionHandler: Refreshes the cookie if necessary and adds relevant cookie metadata to the output.
- RootControlHandler: Responsible for adding all the controls to the output. Takes as input multiple other control handlers which create the nested values in the controls field.
- LocationInteractionHandler: Catches redirect errors and converts them to JSON objects with a location field.
- InteractionRouteHandler: A WaterfallHandler containing an entry for every supported API route.

All entries contained in the urn:solid-server:default:InteractionRouteHandler have a similar structure: an InteractionRouteHandler, or AuthorizedRouteHandler for authenticated requests, which checks whether the request targets a specific URL and sends the request to its source if there is a match. That source is quite often a ViewInteractionHandler, which returns a specific view on GET requests and performs an operation on POST requests, but other handlers can also occur.
Below we will give an example of one API route and all the components that are necessary to add it to the server.
"},{"location":"architecture/features/accounts/routes/#route-handler","title":"Route handler","text":"{\n \"@id\": \"urn:solid-server:default:AccountWebIdRouter\",\n \"@type\": \"AuthorizedRouteHandler\",\n \"route\": {\n \"@id\": \"urn:solid-server:default:AccountWebIdRoute\",\n \"@type\": \"RelativePathInteractionRoute\",\n \"base\": { \"@id\": \"urn:solid-server:default:AccountIdRoute\" },\n \"relativePath\": \"webid/\"\n },\n \"source\": { \"@id\": \"urn:solid-server:default:WebIdHandler\" }\n}\n The main entry point is the route handler, which determines the URL necessary to reach this API. In this case we create a new route, relative to the urn:solid-server:default:AccountIdRoute. That route specifically matches URLs of the format http://localhost:3000/.account/account/<accountId>/. Here we create a route relative to that one by appending webid, so the resulting route would match http://localhost:3000/.account/account/<accountId>/webid/. Since an AuthorizedRouteHandler is used here, the request also needs to be authenticated using an account cookie. If there is a match, the request will be sent to the urn:solid-server:default:WebIdHandler.
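As an illustration of what this route matching amounts to, the sketch below (a hypothetical helper in plain JavaScript, not the actual RelativePathInteractionRoute implementation) matches the resulting URL format and extracts the account ID:

```javascript
// Hypothetical sketch of the URL matching performed by the route above.
// The real server composes routes dynamically; here the resulting pattern
// is hard-coded for illustration.
function matchWebIdRoute(url) {
  const match = /^http:\/\/localhost:3000\/\.account\/account\/([^/]+)\/webid\/$/u.exec(url);
  return match ? { accountId: match[1] } : undefined;
}

matchWebIdRoute('http://localhost:3000/.account/account/1234/webid/');
// → { accountId: '1234' }
matchWebIdRoute('http://localhost:3000/.account/account/1234/');
// → undefined, so the next handler in the WaterfallHandler gets a chance
```

If no handler in the list matches, the request falls through and results in an error.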
{\n \"@id\": \"urn:solid-server:default:WebIdHandler\",\n \"@type\": \"ViewInteractionHandler\",\n \"source\": {\n \"@id\": \"urn:solid-server:default:LinkWebIdHandler\",\n \"@type\": \"LinkWebIdHandler\",\n \"baseUrl\": { \"@id\": \"urn:solid-server:default:variable:baseUrl\" },\n \"ownershipValidator\": { \"@id\": \"urn:solid-server:default:OwnershipValidator\" },\n \"accountStore\": { \"@id\": \"urn:solid-server:default:AccountStore\" },\n \"webIdStore\": { \"@id\": \"urn:solid-server:default:WebIdStore\" },\n \"identifierStrategy\": { \"@id\": \"urn:solid-server:default:IdentifierStrategy\" }\n }\n}\n The interaction handler is the class that performs the necessary operation based on the request. Often these are wrapped in a ViewInteractionHandler, which allows classes to have different support for GET and POST requests.
{\n \"@id\": \"urn:solid-server:default:InteractionRouteHandler\",\n \"@type\": \"WaterfallHandler\",\n \"handlers\": [\n { \"@id\": \"urn:solid-server:default:AccountWebIdRouter\" }\n ]\n}\n To make sure the API can be accessed, it needs to be added to the list of urn:solid-server:default:InteractionRouteHandler. This is the main handler that contains entries for all the APIs. This block of Components.js adds the route handler defined above to that list.
{\n \"@id\": \"urn:solid-server:default:AccountControlHandler\",\n \"@type\": \"ControlHandler\",\n \"controls\": [{\n \"ControlHandler:_controls_key\": \"webId\",\n \"ControlHandler:_controls_value\": { \"@id\": \"urn:solid-server:default:AccountWebIdRoute\" }\n }]\n}\n To make sure people can find the API, it is necessary to link it through the associated controls object. This API is related to account management, so we add its route in the account controls with the key webId. More information about controls can be found here.
{\n \"@id\": \"urn:solid-server:default:HtmlViewHandler\",\n \"@type\": \"HtmlViewHandler\",\n \"templates\": [{\n \"@id\": \"urn:solid-server:default:LinkWebIdHtml\",\n \"@type\": \"HtmlViewEntry\",\n \"filePath\": \"@css:templates/identity/account/link-webid.html.ejs\",\n \"route\": { \"@id\": \"urn:solid-server:default:AccountWebIdRoute\" }\n }]\n}\n Some API routes also have an associated HTML page, in which case the page needs to be added to the urn:solid-server:default:HtmlViewHandler, which is what we do here. Usually you will also want to add HTML controls so the page can be found.
{\n \"@id\": \"urn:solid-server:default:AccountHtmlControlHandler\",\n \"@type\": \"ControlHandler\",\n \"controls\": [{\n \"ControlHandler:_controls_key\": \"linkWebId\",\n \"ControlHandler:_controls_value\": { \"@id\": \"urn:solid-server:default:AccountWebIdRoute\" }\n }]\n}\n"},{"location":"architecture/features/protocol/authorization/","title":"Authorization","text":"flowchart TD\n AuthorizingHttpHandler(\"<br>AuthorizingHttpHandler\")\n AuthorizingHttpHandler --> AuthorizingHttpHandlerArgs\n\n subgraph AuthorizingHttpHandlerArgs[\" \"]\n CredentialsExtractor(\"<strong>CredentialsExtractor</strong><br><i>CredentialsExtractor</i>\")\n ModesExtractor(\"<strong>ModesExtractor</strong><br><i>ModesExtractor</i>\")\n PermissionReader(\"<strong>PermissionReader</strong><br><i>PermissionReader</i>\")\n Authorizer(\"<strong>Authorizer</strong><br>PermissionBasedAuthorizer\")\n OperationHttpHandler(\"<br><i>OperationHttpHandler</i>\")\n end Authorization is usually handled by the AuthorizingHttpHandler, which receives a parsed HTTP request in the form of an Operation. It goes through the following steps:
- The CredentialsExtractor identifies the credentials of the agent making the call.
- The ModesExtractor finds which access modes are needed for which resources.
- The PermissionReader determines the permissions the agent has on the targeted resources.
- These results are then combined and checked by the Authorizer.
- If the request is allowed, it is passed on to the OperationHttpHandler; otherwise an error is thrown.

There are multiple CredentialsExtractors that each determine identity in a different way. Potentially multiple extractors can apply, making a requesting agent have multiple credentials.
The diagram below shows the default configuration if authentication is enabled.
flowchart TD\n CredentialsExtractor(\"<strong>CredentialsExtractor</strong><br>UnionCredentialsExtractor\")\n CredentialsExtractor --> CredentialsExtractorArgs\n\n subgraph CredentialsExtractorArgs[\" \"]\n WaterfallHandler(\"<br>WaterfallHandler\")\n PublicCredentialsExtractor(\"<br>PublicCredentialsExtractor\")\n end\n\n WaterfallHandler --> WaterfallHandlerArgs\n subgraph WaterfallHandlerArgs[\" \"]\n direction LR\n DPoPWebIdExtractor(\"<br>DPoPWebIdExtractor\") --> BearerWebIdExtractor(\"<br>BearerWebIdExtractor\")\n end Both of the WebID extractors make use of the access-token-verifier library to parse incoming tokens based on the Solid-OIDC specification. All these credentials then get combined into a single union object.
If successful, a CredentialsExtractor will return an object containing all the information extracted, such as the WebID of the agent, or the issuer of the token.
There are also debug configuration options available that can be used to simulate credentials. These can be enabled as different options through the config/ldp/authentication imports.
Access modes are a predefined list of read, write, append, create and delete. The ModesExtractor determines which modes will be necessary and for which resources, based on the request contents.
flowchart TD\n ModesExtractor(\"<strong>ModesExtractor</strong><br>IntermediateCreateExtractor\")\n ModesExtractor --> HttpModesExtractor(\"<strong>HttpModesExtractor</strong><br>WaterfallHandler\")\n\n HttpModesExtractor --> HttpModesExtractorArgs\n\n subgraph HttpModesExtractorArgs[\" \"]\n direction LR\n PatchModesExtractor(\"<strong>PatchModesExtractor</strong><br><i>ModesExtractor</i>\") --> MethodModesExtractor(\"<br>MethodModesExtractor\")\n end The IntermediateCreateExtractor is responsible if requests try to create intermediate containers with a single request. E.g., a PUT request to /foo/bar/baz should create both the /foo/ and /foo/bar/ containers in case they do not exist yet. This extractor makes sure that create permissions are also checked on those containers.
Modes can usually be determined based on just the HTTP methods, which is what the MethodModesExtractor does. A GET request will always need the read mode for example.
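As a rough sketch of that idea (simplified and with hypothetical names; the real MethodModesExtractor also handles cases such as needing create on a PUT to a non-existent resource), the method-to-mode mapping could look like:

```javascript
// Simplified, illustrative mapping of HTTP methods to access modes.
const METHOD_MODES = {
  GET: [ 'read' ],
  HEAD: [ 'read' ],
  PUT: [ 'write' ],
  POST: [ 'append' ],
  DELETE: [ 'delete' ],
};

function extractModes(method) {
  const modes = METHOD_MODES[method];
  if (!modes) {
    throw new Error(`Cannot determine required modes for ${method}`);
  }
  return new Set(modes);
}

extractModes('GET'); // → Set { 'read' }
```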
The only exception is PATCH requests, where the necessary modes depend on the body and the PATCH type.
flowchart TD\n PatchModesExtractor(\"<strong>PatchModesExtractor</strong><br>WaterfallHandler\") --> PatchModesExtractorArgs\n subgraph PatchModesExtractorArgs[\" \"]\n N3PatchModesExtractor(\"<br>N3PatchModesExtractor\")\n SparqlUpdateModesExtractor(\"<br>SparqlUpdateModesExtractor\")\n end The server supports both N3 Patch and SPARQL Update PATCH requests. In both cases it will parse the bodies to determine what the impact would be of the request and what modes it requires.
"},{"location":"architecture/features/protocol/authorization/#permission-reading","title":"Permission reading","text":"PermissionReaders take the input of the above to determine which permissions are available. The modes from the previous step are not yet needed, but can be used as an optimization, as we only need to know if we have permission on those modes. Each reader returns all the information it can find based on the resources and modes it receives. In most of the default configurations the following readers are combined when WebACL is enabled as the authorization method. In case authorization is disabled by changing the authorization import to config/ldp/authorization/allow-all.json, the diagram would be a single class that always returns all permissions.
flowchart TD\n PermissionReader(\"<strong>PermissionReader</strong><br>AuxiliaryReader\")\n PermissionReader --> UnionPermissionReader(\"<br>UnionPermissionReader\")\n UnionPermissionReader --> UnionPermissionReaderArgs\n\n subgraph UnionPermissionReaderArgs[\" \"]\n PathBasedReader(\"<strong>PathBasedReader</strong><br>PathBasedReader\")\n OwnerPermissionReader(\"<strong>OwnerPermissionReader</strong><br>OwnerPermissionReader\")\n WrappedWebAclReader(\"<strong>WrappedWebAclReader</strong><br>ParentContainerReader\")\n end\n\n WrappedWebAclReader --> WebAclAuxiliaryReader(\"<strong>WebAclAuxiliaryReader</strong><br>AuthAuxiliaryReader\")\n WebAclAuxiliaryReader --> WebAclReader(\"<strong>WebAclReader</strong><br>WebAclReader\") The first thing that happens is that if the target is an auxiliary resource that uses the authorization of its subject resource, the AuxiliaryReader inserts that identifier instead. An example of this is if the request targets the metadata of a resource.
The UnionPermissionReader then combines the results of its readers into a single permission object. If one reader rejects a specific mode and another allows it, the rejection takes priority.
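That combination rule can be sketched as follows (illustrative data shapes, not the server's actual classes):

```javascript
// Combine permission results from several readers; an explicit `false`
// (rejection) for a mode takes priority over any `true` (allowance).
function mergePermissions(results) {
  const merged = {};
  for (const result of results) {
    for (const [ mode, allowed ] of Object.entries(result)) {
      if (merged[mode] === false) {
        continue; // a rejection was already recorded and cannot be undone
      }
      merged[mode] = allowed;
    }
  }
  return merged;
}

mergePermissions([{ read: true, write: true }, { write: false }]);
// → { read: true, write: false }
```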
The PathBasedReader rejects all permissions for certain paths. This is used to prevent access to the internal data of the server.
The OwnerPermissionReader makes sure owners always have control access to the pods they created on the server. Users will always be able to modify the ACL resources in their pod, even if they accidentally removed their own access.
The final readers are specifically relevant for the WebACL algorithm. The ParentContainerReader checks the permissions on a parent resource if required: creating a resource requires append permissions on the parent container, while deleting a resource requires write permissions there.
In case the target is an ACL resource, control permissions need to be checked, no matter what mode was generated by the ModesExtractor. The AuthAuxiliaryReader makes sure this conversion happens.
Finally, the WebAclReader implements the effective ACL resource algorithm and returns the permissions it finds in that resource. In case no ACL resource is found, this indicates a configuration error and no permissions will be granted.
It is also possible to use ACP as the authorization method instead of WebACL. In that case the diagram is very similar, except that the AuthAuxiliaryReader is configured for Access Control Resources and points to an AcpReader instead.
All the results of the previous steps then get combined in the PermissionBasedAuthorizer to either allow or reject a request. If no permissions are found for a requested mode, or they are explicitly forbidden, a 401/403 will be returned, depending on whether the agent was logged in.
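That final decision can be sketched as follows (a hypothetical function; the actual PermissionBasedAuthorizer works on richer credential and permission objects):

```javascript
// Reject a request when a required mode is missing or forbidden.
// Unauthenticated agents get a 401 so they can try to log in;
// authenticated agents get a 403.
function authorize(requestedModes, permissions, isAuthenticated) {
  for (const mode of requestedModes) {
    if (permissions[mode] !== true) {
      const status = isAuthenticated ? 403 : 401;
      throw Object.assign(new Error(`Missing ${mode} permission`), { status });
    }
  }
}

let status;
try {
  authorize([ 'write' ], { read: true }, false);
} catch (error) {
  status = error.status; // 401, since the agent was not logged in
}
```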
The LdpHandler, named as a reference to the Linked Data Platform specification, chains several handlers together, each with their own specific purpose, to fully resolve the HTTP request. It specifically handles Solid requests as described in the protocol specification, e.g. a POST request to create a new resource.
Below is a simplified view of how these handlers are linked.
flowchart LR\n LdpHandler(\"<strong>LdpHandler</strong><br>ParsingHttpHandler\")\n LdpHandler --> AuthorizingHttpHandler(\"<br>AuthorizingHttpHandler\")\n AuthorizingHttpHandler --> OperationHandler(\"<strong>OperationHandler</strong><br><i>OperationHandler</i>\")\n OperationHandler --> ResourceStore(\"<strong>ResourceStore</strong><br><i>ResourceStore</i>\") A standard request would go through the following steps:
1. The ParsingHttpHandler parses the HTTP request into a manageable format, both body and metadata such as headers.
2. The AuthorizingHttpHandler verifies if the request is authorized to access the targeted resource.
3. The OperationHandler determines which action is required based on the HTTP method.
4. The ResourceStore does all the relevant data work.
5. The ParsingHttpHandler eventually receives the response data, or an error, and handles the output.

Below are sections that go deeper into the specific steps.
flowchart TD\n ParsingHttpHandler(\"<br>ParsingHttpHandler\")\n ParsingHttpHandler --> ParsingHttpHandlerArgs\n\n subgraph ParsingHttpHandlerArgs[\" \"]\n RequestParser(\"<strong>RequestParser</strong><br>BasicRequestParser\")\n AuthorizingHttpHandler(\"<strong></strong><br>AuthorizingHttpHandler\")\n ErrorHandler(\"<strong>ErrorHandler</strong><br><i>ErrorHandler</i>\")\n ResponseWriter(\"<strong>ResponseWriter</strong><br>BasicResponseWriter\")\n end A ParsingHttpHandler handles both the parsing of the input data and the serializing of the output data. It follows these 3 steps:
1. A RequestParser converts the incoming data into an Operation.
2. The Operation is sent to the AuthorizingHttpHandler, which returns either a Representation if the operation was a success, or an Error in case something went wrong.
3. In case of an error, the ErrorHandler converts the Error into a ResponseDescription.
4. A ResponseWriter outputs the ResponseDescription as an HTTP response.

flowchart TD\n RequestParser(\"<strong>RequestParser</strong><br>BasicRequestParser\") --> RequestParserArgs\n subgraph RequestParserArgs[\" \"]\n TargetExtractor(\"<strong>TargetExtractor</strong><br>OriginalUrlExtractor\")\n PreferenceParser(\"<strong>PreferenceParser</strong><br>AcceptPreferenceParser\")\n MetadataParser(\"<strong>MetadataParser</strong><br><i>MetadataParser</i>\")\n BodyParser(\"<br><i>BodyParser</i>\")\n Conditions(\"<br>BasicConditionsParser\")\n end\n\n OriginalUrlExtractor --> IdentifierStrategy(\"<strong>IdentifierStrategy</strong><br><i>IdentifierStrategy</i>\") The BasicRequestParser is mostly an aggregator of multiple smaller parsers that each handle a very specific part.
This is a single class, the OriginalUrlExtractor, but it fulfills the very important role of making sure input URLs are handled consistently.
The query parameters will always be completely removed from the URL.
There is also an algorithm to make sure all URLs have a \"canonical\" version, since, for example, both & and %26 can be interpreted in the same way. Specifically, all special characters will be converted to their percent-encoded form.
The IdentifierStrategy it gets as input is used to determine if the resulting URL is within the scope of the server. This can differ depending on whether the server uses subdomains.
The resulting identifier will be stored in the target field of an Operation object.
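The canonicalization mentioned above can be sketched as follows (a hypothetical helper, simplified to path segments only; like the server, it drops the query string):

```javascript
// Decode and re-encode every path segment so that equivalent URLs,
// e.g. containing "&" versus "%26", end up with one canonical form.
function toCanonicalUrl(url) {
  const { origin, pathname } = new URL(url);
  const canonicalPath = pathname
    .split('/')
    .map((segment) => encodeURIComponent(decodeURIComponent(segment)))
    .join('/');
  // Query parameters are removed, as described above.
  return origin + canonicalPath;
}

toCanonicalUrl('http://localhost:3000/a&b?x=1');
// → 'http://localhost:3000/a%26b'
```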
The AcceptPreferenceParser parses the Accept header and all the relevant Accept-* headers. These will all be put into the preferences field of an Operation object. These will later be used to handle the content negotiation.
For example, sending an Accept: text/turtle; q=0.9 header will result in the preferences object { type: { 'text/turtle': 0.9 } }.
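A minimal sketch of that parsing step (the real AcceptPreferenceParser follows the full HTTP grammar; this hypothetical version only handles simple headers):

```javascript
// Parse an Accept header into the preferences shape described above.
// Media types without an explicit q parameter default to quality 1.
function parseAccept(header) {
  const type = {};
  for (const part of header.split(',')) {
    const [ mediaType, ...params ] = part.split(';').map((s) => s.trim());
    const qParam = params.find((p) => p.startsWith('q='));
    type[mediaType] = qParam ? Number(qParam.slice(2)) : 1;
  }
  return { type };
}

parseAccept('text/turtle; q=0.9');
// → { type: { 'text/turtle': 0.9 } }
```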
Several other headers can have relevant metadata, such as the Content-Type header, or the Link: <http://www.w3.org/ns/ldp#Container>; rel=\"type\" header which is used to indicate to the server that a request intends to create a container.
Such headers are converted to RDF triples and stored in the RepresentationMetadata object, which will be part of the body field in the Operation.
The default MetadataParser is a ParallelHandler that contains several smaller parsers, each looking at a specific header.
In case of most requests, the input data stream is used directly in the body field of the Operation, with a few minor checks to make sure the HTTP specification is being followed.
In the case of PATCH requests though, there are several specific body parsers that will convert the request into a JavaScript object containing all the necessary information to execute such a PATCH. Several validation checks will already take place there as well.
"},{"location":"architecture/features/protocol/parsing/#conditions","title":"Conditions","text":"The BasicConditionsParser parses everything related to conditions headers, such as if-none-match or if-modified-since, and stores the relevant information in the conditions field of the Operation. These will later be used to make sure the request should be aborted or not.
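As a sketch of how such conditions are later used (a hypothetical helper; the server's actual Conditions interface differs):

```javascript
// Evaluate an if-none-match condition against the current ETag of the
// targeted resource. If the condition fails, the request is aborted,
// e.g. with a 304 for reads or a 412 for writes.
function matchesConditions(conditions, currentETag) {
  const { notMatchesETag } = conditions;
  if (notMatchesETag?.includes('*') && currentETag) {
    return false; // "*" fails as soon as the resource exists
  }
  if (notMatchesETag?.includes(currentETag)) {
    return false; // the stored representation still matches
  }
  return true;
}

matchesConditions({ notMatchesETag: [ '"123"' ] }, '"123"'); // → false
matchesConditions({ notMatchesETag: [ '"123"' ] }, '"456"'); // → true
```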
In case a request is successful, the AuthorizingHttpHandler will return a ResponseDescription, and if not it will throw an error.
In case an error gets thrown, this will be caught by the ErrorHandler and converted into a ResponseDescription. The request preferences will be used to make sure the serialization is one that is preferred.
Either way we will have a ResponseDescription, which will be sent to the BasicResponseWriter to convert into output headers, data and a status code.
To convert the metadata into headers, it uses a MetadataWriter, which functions as the reverse of the MetadataParser mentioned above: it has multiple writers which each convert certain metadata into a specific header.
As described here, there is a generic solution for modifying resources as a result of PATCH requests. It consists of the following steps:

1. The current representation of the resource is retrieved from the store.
2. The patch is applied to that representation.
3. The result is written back to the store.
The architecture is described more in-depth below.
flowchart LR\n PatchingStore(\"<strong>ResourceStore_Patching</strong><br>ResourceStore\")\n PatchingStore --> PatchHandler(\"<strong>PatchHandler</strong><br>RepresentationPatchHandler\")\n PatchHandler --> Patchers(\"<br>WaterfallHandler\")\n Patchers --> ConvertingPatcher(\"<br>ConvertingPatcher\")\n ConvertingPatcher --> RdfPatcher(\"<strong>RdfPatcher</strong><br>RdfPatcher\") flowchart LR\n RdfPatcher(\"<strong>RdfPatcher</strong><br>RdfPatcher\")\n RdfPatcher --> RDFStore(\"<strong>PatchHandler_RDFStore</strong><br>WaterfallHandler\")\n RDFStore --> RDFStoreArgs\n\n subgraph RDFStoreArgs[\" \"]\n Immutable(\"<strong>PatchHandler_ImmutableMetadata</strong><br>ImmutableMetadataPatcher\")\n RDF(\"<strong>PatchHandler_RDF</strong><br>WaterfallHandler\")\n Immutable --> RDF\n end\n\n RDF --> RDFArgs\n\n subgraph RDFArgs[\" \"]\n direction LR\n N3(\"<br>N3Patcher\")\n SPARQL(\"<br>SparqlUpdatePatcher\")\n end The PatchingStore is the entry point. It first checks whether the next store supports modifying resources. Only if this is not the case will it start the generic patching solution by calling its PatchHandler.
The RepresentationPatchHandler calls the source ResourceStore to get a data stream representing the current state of the resource. It feeds that stream as input into a RepresentationPatcher, and then writes the result back to the store.
Similarly to the way accessing resources is done through a stack of ResourceStores, patching is done through a stack of RepresentationPatchers, each performing a step in the patching process.
The ConvertingPatcher is responsible for converting the original resource to a stream of quad objects, and converting the modified result back to the original type. By converting to quads, all other relevant classes can act independently of the actual RDF serialization type. For similar reasons, the RdfPatcher converts the quad stream to an N3.js Store so the next patchers do not have to worry about handling stream data and have access to the entire resource in memory.
The ImmutableMetadataPatcher keeps track of a list of triples that cannot be modified in the metadata of a resource. For example, it is not possible to modify the metadata to indicate whether it is a storage root. The ImmutableMetadataPatcher tracks all these triples before and after a metadata resource is modified, and throws an error if one is modified. If the target resource is not metadata but a standard resource, this class will be skipped.
Finally, either the N3Patcher or the SparqlUpdatePatcher will be called, depending on the type of patch that is requested.
The interface of a ResourceStore is mostly a 1-to-1 mapping of the HTTP methods:
- getRepresentation
- setRepresentation
- addResource
- deleteResource
- modifyResource

The corresponding OperationHandler of the relevant method is responsible for calling the correct ResourceStore function.
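That dispatch can be sketched as follows (toy store and an assumed method-to-function mapping, for illustration only):

```javascript
// Toy store whose functions just record which call was made.
const store = {
  getRepresentation: (id) => `GET ${id}`,
  setRepresentation: (id) => `PUT ${id}`,
  addResource: (id) => `POST ${id}`,
  deleteResource: (id) => `DELETE ${id}`,
  modifyResource: (id) => `PATCH ${id}`,
};

// Assumed mapping from HTTP method to store function;
// GET and HEAD both read a representation.
const dispatch = {
  GET: 'getRepresentation',
  HEAD: 'getRepresentation',
  PUT: 'setRepresentation',
  POST: 'addResource',
  DELETE: 'deleteResource',
  PATCH: 'modifyResource',
};

function handleOperation(method, target) {
  return store[dispatch[method]](target);
}

handleOperation('PUT', '/foo'); // → 'PUT /foo'
```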
In practice, the community server has multiple resource stores chained together, each handling a specific part of the request and then calling the next store in the chain. The default configurations come with the following stores:
- MonitoringStore
- IndexRepresentationStore
- LockingResourceStore
- PatchingStore
- RepresentationConvertingStore
- DataAccessorBasedStore

This chain can be seen in the configuration part in config/storage/middleware/default.json and all the entries in config/storage/backend.
This store emits the events that are necessary to emit notifications when resources change.
There are 4 different events that can be emitted:
- this.emit('changed', identifier, activity): emitted for every resource that was changed/affected by a call to the store, with activity being undefined or one of the available ActivityStreams terms.
- this.emit(AS.Create, identifier): emitted for every resource that was created by the call to the store.
- this.emit(AS.Update, identifier): emitted for every resource that was updated by the call to the store.
- this.emit(AS.Delete, identifier): emitted for every resource that was deleted by the call to the store.

A changed event will always be emitted if a resource was changed. If the correct metadata was set by the source ResourceStore, an additional field will be sent along indicating the type of change, and an additional corresponding event will be emitted, depending on what the change is.
When doing a GET request on a container /container/, this container returns the contents of /container/index.html instead if HTML is the preferred response type. All these values are the defaults and can be configured for other resources and media types.
To prevent data corruption, the server locks resources when being targeted by a request. Locks are only released when an operation is completely finished, in the case of read operations this means the entire data stream is read, and in the case of write operations this happens when all relevant data is written. The default lock that is used is a readers-writer lock. This allows simultaneous read requests on the same resource, but only while no write request is in progress.
"},{"location":"architecture/features/protocol/resource-store/#patchingstore","title":"PatchingStore","text":"PATCH operations in Solid apply certain transformations on the target resource, which makes them more complicated than only reading or writing data since it involves both. The PatchingStore provides a generic solution for backends that do not implement the modifyResource function so new backends can be added more easily. In case the next store in the chain does not support PATCH, the PatchingStore will GET the data from the next store, apply the transformation on that data, and then PUT it back to the store.
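That GET-apply-PUT fallback can be sketched like this (a synchronous toy version with an in-memory source; all names are illustrative):

```javascript
// A patching wrapper: if the wrapped store cannot patch natively,
// read the representation, apply the patch, and write the result back.
class PatchingStoreSketch {
  constructor(source, patcher) {
    this.source = source;
    this.patcher = patcher;
  }

  modifyResource(identifier, patch) {
    if (this.source.modifyResource) {
      return this.source.modifyResource(identifier, patch); // native support
    }
    const current = this.source.getRepresentation(identifier);
    const patched = this.patcher.apply(current, patch);
    return this.source.setRepresentation(identifier, patched);
  }
}

// In-memory source without native PATCH support.
const data = new Map([[ '/doc', 'hello' ]]);
const source = {
  getRepresentation: (id) => data.get(id),
  setRepresentation: (id, body) => data.set(id, body),
};
// A trivial "patch" that appends text.
const patcher = { apply: (body, patch) => body + patch };

new PatchingStoreSketch(source, patcher).modifyResource('/doc', ' world');
// data.get('/doc') → 'hello world'
```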
This store handles everything related to content negotiation. In case the resulting data of a GET request does not match the preferences of the request, it will be converted here. Similarly, if incoming data does not match the type expected by the store (the SPARQL backend only accepts triples, for example), that is also handled here.
"},{"location":"architecture/features/protocol/resource-store/#dataaccessorbasedstore","title":"DataAccessorBasedStore","text":"Large parts of the requirements of the Solid protocol specification are resolved by the DataAccessorBasedStore: POST only working on containers, DELETE not working on non-empty containers, generating ldp:contains triples for containers, etc. Most of this behaviour is independent of how the data is stored which is why it can be generalized here. The store's name comes from the fact that it makes use of DataAccessors to handle the read/write of resources. A DataAccessor is a simple interface that only focuses on handling the data. It does not concern itself with any of the necessary Solid checks as it assumes those have already been made. This means that if a storage method needs to be supported, only a new DataAccessor needs to be made, after which it can be plugged into the rest of the server.
The community server is fully written in TypeScript.
All changes should be done through pull requests.
We recommend first discussing a possible solution in the relevant issue to reduce the amount of changes that will be requested.
In case any of your changes are breaking, make sure you target the next major branch (versions/x.0.0) instead of the main branch. Breaking changes include: changing interface/class signatures, potentially breaking external custom configurations, and breaking how internal data is stored. In case of doubt you probably want to target the next major branch.
We make use of Conventional Commits.
Don't forget to update the release notes when adding new major features. Also update any relevant documentation in case this is needed.
When making changes to a pull request, we prefer to update the existing commits with a rebase instead of appending new commits; this way the PR can be rebased directly onto the target branch instead of needing to be squashed.
There are strict requirements from the linter and the test coverage before a PR is valid. These are configured to run automatically when trying to commit to git. Although there are no tests for it (yet), we strongly advise documenting with TSDoc.
If a list of entries is alphabetically sorted, such as index.ts, make sure it stays that way.
"},{"location":"contributing/release/","title":"Releasing a new major version","text":"This is only relevant if you are a developer with push access responsible for doing a new release.
Steps to follow:
- Merge main into versions/next-major.
- Verify that the RELEASE_NOTES.md are correct.
- npm run release -- -r major
    - This updates the configs and commits with chore(release): Update configs to vx.0.0, updates the version in package.json, and generates the new entries in CHANGELOG.md. Commits with chore(release): Release version vx.0.0 of the npm package.
    - Optionally, run npx commit-and-tag-version -r major --dry-run to preview the commands that will be run and the changes to CHANGELOG.md.
- The postrelease script will now prompt you to manually edit the CHANGELOG.md.
- The postrelease script will amend the release commit, create an annotated tag and push changes to origin.
- Merge versions/next-major into main and push.
- npm publish
- npm dist-tag add @solid/community-server@x.0.0 next
- Update the versions/x.0.0 branch to the next version.

When doing a pre-release of a major version:

- Version with npm run release -- -r major --prerelease alpha.
- Do not merge versions/next-major into main.
- Publish with npm publish --tag next.

When doing a minor release:

- Version with npm run release -- -r minor.
- Do not merge versions/next-major into main.

One potential issue for scripts and other applications is that logging in and authenticating requires user interaction. The CSS offers an alternative solution for such cases by making use of Client Credentials. Once you have created an account as described in the Identity Provider section, users can request a token that apps can use to authenticate without user input.
All requests to the client credentials API currently require you to send along the email and password of that account to identify yourself. This is a temporary solution until the server has more advanced account management, after which this API will change.
Below is example code of how to make use of these tokens. It makes use of several utility functions from the Solid Authentication Client. Note that the code below uses top-level await, which not all JavaScript engines support, so this should all be contained in an async function.
A token can be created either on your account page, by default http://localhost:3000/.account/, or by calling the relevant API.
Below is an example of how to call the API to generate such a token.
The code below generates a token linked to your account and WebID. This only needs to be done once, afterwards this token can be used for all future requests.
Before doing the step below, you already need to have an authorization value that you get after logging in to your account. In the example below the cookie value is used. In the default server configurations, you can log in through the email/password API.
// This assumes your server is started under http://localhost:3000/.\n// It also assumes you have already logged in and `cookie` contains a valid cookie header\n// as described in the API documentation.\nconst indexResponse = await fetch('http://localhost:3000/.account/', { headers: { cookie }});\nconst { controls } = await indexResponse.json();\nconst response = await fetch(controls.account.clientCredentials, {\n method: 'POST',\n headers: { cookie, 'content-type': 'application/json' },\n // The name field will be used when generating the ID of your token.\n // The WebID field determines which WebID you will identify as when using the token.\n // Only WebIDs linked to your account can be used.\n body: JSON.stringify({ name: 'my-token', webId: 'http://localhost:3000/my-pod/card#me' }),\n});\n\n// These are the identifier and secret of your token.\n// Store the secret somewhere safe as there is no way to request it again from the server!\n// The `resource` value can be used to delete the token at a later point in time.\nconst { id, secret, resource } = await response.json();\n In case something goes wrong the status code will be 400/500 and the response body will contain a description of the problem.
"},{"location":"usage/client-credentials/#requesting-an-access-token","title":"Requesting an Access token","text":"The ID and secret combination generated above can be used to request an Access Token from the server. This Access Token is only valid for a certain amount of time, after which a new one needs to be requested.
import { createDpopHeader, generateDpopKeyPair } from '@inrupt/solid-client-authn-core';\nimport fetch from 'node-fetch';\n\n// A key pair is needed for signing.\n// This function from `solid-client-authn` generates such a pair for you.\nconst dpopKey = await generateDpopKeyPair();\n\n// These are the ID and secret generated in the previous step.\n// Both the ID and the secret need to be form-encoded.\nconst authString = `${encodeURIComponent(id)}:${encodeURIComponent(secret)}`;\n// This URL can be found by looking at the \"token_endpoint\" field at\n// http://localhost:3000/.well-known/openid-configuration\n// if your server is hosted at http://localhost:3000/.\nconst tokenUrl = 'http://localhost:3000/.oidc/token';\nconst response = await fetch(tokenUrl, {\n method: 'POST',\n headers: {\n // The header needs to be in base64 encoding.\n authorization: `Basic ${Buffer.from(authString).toString('base64')}`,\n 'content-type': 'application/x-www-form-urlencoded',\n dpop: await createDpopHeader(tokenUrl, 'POST', dpopKey),\n },\n body: 'grant_type=client_credentials&scope=webid',\n});\n\n// This is the Access token that will be used to do an authenticated request to the server.\n// The JSON also contains an \"expires_in\" field in seconds,\n// which you can use to know when you need to request a new Access token.\nconst { access_token: accessToken } = await response.json();\n"},{"location":"usage/client-credentials/#using-the-access-token-to-make-an-authenticated-request","title":"Using the Access token to make an authenticated request","text":"Once you have an Access token, you can use it for authenticated requests until it expires.
import { buildAuthenticatedFetch } from '@inrupt/solid-client-authn-core';\nimport fetch from 'node-fetch';\n\n// The DPoP key needs to be the same key as the one used in the previous step.\n// The Access token is the one generated in the previous step.\nconst authFetch = await buildAuthenticatedFetch(fetch, accessToken, { dpopKey });\n// authFetch can now be used as a standard fetch function that will authenticate as your WebID.\n// This request will do a simple GET for example.\nconst response = await authFetch('http://localhost:3000/private');\n"},{"location":"usage/client-credentials/#other-token-actions","title":"Other token actions","text":"You can see all your existing tokens on your account page, or by doing a GET request to the same API used to create new tokens. The details of a token can be seen by doing a GET request to the resource URL of the token.
A token can be deleted by doing a DELETE request to the resource URL of the token.
All of these actions require you to be logged in to the account.
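As a recap of the token request earlier on this page, the construction of the Basic authorization header can be sketched in isolation. The `id` and `secret` values below are placeholders, not real credentials:

```javascript
// Placeholders standing in for a generated token ID and secret.
const id = 'my-token';
const secret = 'my-secret';

// Both values are form-encoded and joined with a colon,
// then the whole string is base64-encoded for the header.
const authString = `${encodeURIComponent(id)}:${encodeURIComponent(secret)}`;
const authHeader = `Basic ${Buffer.from(authString).toString('base64')}`;
console.log(authHeader); // Basic bXktdG9rZW46bXktc2VjcmV0
```

The form-encoding step matters because generated secrets may contain characters that are not safe to place in a `user:password` pair unescaped.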
"},{"location":"usage/dev-configuration/","title":"Configuring the CSS as a development server in another project","text":"It can be useful to use the CSS as local server to develop Solid applications against. As an alternative to using CLI arguments, or environment variables, the CSS can be configured in the package.json as follows:
{\n \"name\": \"test\",\n \"version\": \"0.0.0\",\n \"private\": true,\n \"config\": {\n \"community-solid-server\": {\n \"port\": 3001,\n \"loggingLevel\": \"error\"\n }\n },\n \"scripts\": {\n \"dev:pod\": \"community-solid-server\"\n },\n \"devDependencies\": {\n \"@solid/community-server\": \"^7.0.0\"\n }\n}\n These parameters will then be used when the community-solid-server command is executed as an npm script (as shown in the example above), or whenever the community-solid-server command is executed in the same folder as the package.json.
Alternatively, the configuration parameters may be placed in a configuration file named .community-solid-server.config.json as follows:
{\n \"port\": 3001,\n \"loggingLevel\": \"error\"\n}\n The config may also be written in JavaScript with the config as the default export such as the following .community-solid-server.config.js:
module.exports = {\n port: 3001,\n loggingLevel: 'error'\n};\n"},{"location":"usage/example-requests/","title":"Interacting with the server","text":""},{"location":"usage/example-requests/#put-creating-resources-for-a-given-url","title":"PUT: Creating resources for a given URL","text":"Create a plain text file:
curl -X PUT -H \"Content-Type: text/plain\" \\\n -d \"abc\" \\\n http://localhost:3000/myfile.txt\n Create a turtle file:
curl -X PUT -H \"Content-Type: text/turtle\" \\\n -d \"<ex:s> <ex:p> <ex:o>.\" \\\n http://localhost:3000/myfile.ttl\n"},{"location":"usage/example-requests/#post-creating-resources-at-a-generated-url","title":"POST: Creating resources at a generated URL","text":"Create a plain text file:
curl -X POST -H \"Content-Type: text/plain\" \\\n -d \"abc\" \\\n http://localhost:3000/\n Create a turtle file:
curl -X POST -H \"Content-Type: text/turtle\" \\\n -d \"<ex:s> <ex:p> <ex:o>.\" \\\n http://localhost:3000/\n The response's Location header will contain the URL of the created resource.
GET: Retrieving resources","text":"Retrieve a plain text file:
curl -H \"Accept: text/plain\" \\\n http://localhost:3000/myfile.txt\n Retrieve a turtle file:
curl -H \"Accept: text/turtle\" \\\n http://localhost:3000/myfile.ttl\n Retrieve a turtle file in a different serialization:
curl -H \"Accept: application/ld+json\" \\\n http://localhost:3000/myfile.ttl\n"},{"location":"usage/example-requests/#delete-deleting-resources","title":"DELETE: Deleting resources","text":"curl -X DELETE http://localhost:3000/myfile.txt\n"},{"location":"usage/example-requests/#patch-modifying-resources","title":"PATCH: Modifying resources","text":"Modify a resource using N3 Patch:
curl -X PATCH -H \"Content-Type: text/n3\" \\\n --data-raw \"@prefix solid: <http://www.w3.org/ns/solid/terms#>. _:rename a solid:InsertDeletePatch; solid:inserts { <ex:s2> <ex:p2> <ex:o2>. }.\" \\\n http://localhost:3000/myfile.ttl\n Modify a resource using SPARQL Update:
curl -X PATCH -H \"Content-Type: application/sparql-update\" \\\n -d \"INSERT DATA { <ex:s2> <ex:p2> <ex:o2> }\" \\\n http://localhost:3000/myfile.ttl\n"},{"location":"usage/example-requests/#head-retrieve-resources-headers","title":"HEAD: Retrieve resources headers","text":"curl -I -H \"Accept: text/plain\" \\\n http://localhost:3000/myfile.txt\n"},{"location":"usage/example-requests/#options-retrieve-resources-communication-options","title":"OPTIONS: Retrieve resources communication options","text":"curl -X OPTIONS -i http://localhost:3000/myfile.txt\n"},{"location":"usage/identity-provider/","title":"Identity Provider","text":"Besides implementing the Solid protocol, the community server can also be an Identity Provider (IDP), officially known as an OpenID Provider (OP), following the Solid-OIDC specification as much as possible.
It is recommended to use the latest version of the Solid authentication client to interact with the server.
It also provides account management options for creating pods and WebIDs to be used during authentication, which are discussed in more depth below. The links on this page assume the server is hosted at http://localhost:3000/.
To register an account, you can go to http://localhost:3000/.account/password/register/, if this feature is enabled. There you can create an account with the email/password login method. The password will be salted and hashed before being stored. Afterwards you will be redirected to the account page where you can create pods and link WebIDs to your account.
To create a pod you simply have to fill in the name you want your pod to have. This will then be used to generate the full URL of your pod. For example, if you choose the name test, your pod would be located at http://localhost:3000/test/ and your generated WebID would be http://localhost:3000/test/profile/card#me.
If you fill in a WebID when creating the pod, that WebID will be the one that has access to all data in the pod. If you don't, a WebID will be created in the pod and immediately linked to your account, allowing you to use it for authentication and for accessing the data in that pod.
The generated pod URL also depends on the configuration you chose for your server. If you are using the subdomain feature, the generated pod URL would be http://test.localhost:3000/.
To use Solid authentication, you need to link at least one WebID to your account. This can happen automatically when creating a pod as mentioned above, or can be done manually with external WebIDs.
If you try to link an external WebID, the first attempt will return an error indicating that you need to add an identification triple to your WebID. After doing that, you can try to link it again; this is how we verify that you are the owner of that WebID. Afterwards, the page will inform you that you have to add a triple to your WebID if you want to use the server as your IDP.
"},{"location":"usage/identity-provider/#logging-in","title":"Logging in","text":"When using an authenticating client, you will be redirected to a login screen asking for your email and password. After that you will be redirected to a page showing some basic information about the client where you can pick the WebID you want to use. There you need to consent that this client is allowed to identify using that WebID. As a result the server will send a token back to the client that contains all the information needed to use your WebID.
"},{"location":"usage/identity-provider/#forgot-password","title":"Forgot password","text":"If you forgot your password, you can recover it by going to http://localhost:3000/.account/login/password/forgot/. There you can enter your email address to get a recovery mail to reset your password. This feature only works if a mail server was configured, which by default is not the case.
All of the above happens through HTML pages provided by the server. By default, the server uses the templates found in /templates/identity/ but different templates can be used through configuration.
These templates all make use of a JSON API exposed by the server. A full description of this API can be found here.
"},{"location":"usage/identity-provider/#idp-configuration","title":"IDP configuration","text":"The above descriptions cover server behaviour with most default configurations, but just like any other feature, there are several features that can be changed through the imports in your configuration file.
All available options can be found in the config/identity/ folder. Below we go a bit deeper into the available options.
The access option allows you to set authorization restrictions on the IDP API when enabled, similar to how authorization works on the LDP requests on the server. For example, if the server uses WebACL as authorization scheme, you can put a .acl resource in the /.account/account/ container to restrict who is allowed to access the account creation API. Note that for everything to work there needs to be a .acl resource in /.account/ when using WebACL so resources can be accessed as usual when the server starts up. Make sure you change the permissions on /.account/.acl so not everyone can modify those.
All of the above is only relevant if you use the restricted.json setting for this import. When you use public.json the API will simply always be accessible by everyone.
In case you want users to be able to reset their password when they forget it, you will need to tell the server which email server to use to send reset mails. example.json contains an example of what this looks like. When using this import, you can override the values with those of your own email server by adding the following to your Components.js configuration with updated values:
{\n \"comment\": \"The settings of your email server.\",\n \"@type\": \"Override\",\n \"overrideInstance\": {\n \"@id\": \"urn:solid-server:default:EmailSender\"\n },\n \"overrideParameters\": {\n \"@type\": \"BaseEmailSender\",\n \"senderName\": \"Community Solid Server <solid@example.email>\",\n \"emailConfig_host\": \"smtp.example.email\",\n \"emailConfig_port\": 587,\n \"emailConfig_auth_user\": \"solid@example.email\",\n \"emailConfig_auth_pass\": \"NYEaCsqV7aVStRCbmC\"\n }\n}\n"},{"location":"usage/identity-provider/#handler","title":"handler","text":"Here you determine which features of account management are available. default.json allows everything, disabled.json completely disables account management, and the other options disable account and/or pod creation.
The pod option determines how pods are created. static.json is the expected pod behaviour as described above. dynamic.json is an experimental feature that allows users to have a custom Components.js configuration for their own pod. When using such a configuration, a JSON file will be written containing all the information of the user pods, so they can be recreated when the server restarts.
Due to its modular nature, it is possible to add new login methods to the server, allowing users to log in using methods other than the standard email/password combination. More information on what is required can be found here.
"},{"location":"usage/identity-provider/#data-migration","title":"Data migration","text":"Going from v6 to v7 of the server, the account management is completely rewritten, including how account data is stored on the server. More information about how account data of an existing server can be migrated to the newer version can be found here.
"},{"location":"usage/metadata/","title":"Editing metadata of resources","text":""},{"location":"usage/metadata/#what-is-a-description-resource","title":"What is a description resource","text":"Description resources contain auxiliary information about a resource. In CSS, these represent metadata corresponding to that resource. Every resource always has a corresponding description resource; therefore, description resources cannot be created or deleted directly.
Description resources are discoverable by interacting with their subject resource: the response to a GET or HEAD request on a subject resource will contain a describedby Link Header with a URL that points to its description resource.
Clients should always follow this link rather than guessing its URL, because the Solid Protocol does not mandate a specific description resource URL. By convention, the default CSS configurations give http://example.org/resource the description resource http://example.org/resource.meta.
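As an illustrative sketch (not an official client utility), extracting the describedby target from a Link response header could look like this; the example URL below merely follows the convention of the default configurations:

```javascript
// Minimal sketch of finding the rel="describedby" target in a Link header.
// A production client should use a full Link-header parser: this simplified
// version splits on commas and would break on URLs that contain them.
function findDescribedBy(linkHeader) {
  for (const part of linkHeader.split(',')) {
    const match = /<([^>]+)>\s*;\s*rel="describedby"/.exec(part);
    if (match) {
      return match[1];
    }
  }
  return null;
}

const link = '<http://example.org/resource.meta>; rel="describedby"';
console.log(findDescribedBy(link)); // http://example.org/resource.meta
```

Following the header this way keeps the client working even on servers that place description resources at a different URL.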
Editing the metadata of a resource is performed by editing the description resource directly. This can only be done using PATCH requests (see example workflow).
PUT requests on description resources are not allowed, because they would replace the entire resource state, whereas some metadata is protected or generated by the server.
Similarly, DELETE on description resources is not allowed because a resource will always have some metadata (e.g. rdf:type). Instead, the lifecycle of description resources is managed by the server.
Some metadata is managed by the server and cannot be modified directly, such as the last modified date. The CSS will throw an error (409 ConflictHttpError) when trying to change this protected metadata.
PUT requests on a resource will reset the description resource. There is, however, a way to keep the contents of the description resource prior to the PUT request: adding an HTTP Link header targeting the description resource with rel=\"preserve\".
When the resource URL is http://localhost:3000/foobar, preserving its description resource when updating its contents can be achieved like in the following example:
curl -X PUT 'http://localhost:3000/foobar' \\\n-H 'Content-Type: text/turtle' \\\n-H 'Link: <http://localhost:3000/foobar.meta>;rel=\"preserve\"' \\\n-d \"<ex:s> <ex:p> <ex:o>.\"\n"},{"location":"usage/metadata/#impact-on-creating-containers","title":"Impact on creating containers","text":"When creating a container, the input body is ignored, and performing a PUT request on an existing container will result in an error. Container metadata can only be added and modified by performing a PATCH on the description resource, similarly to documents. This is done to clearly differentiate between a container's representation and its metadata.
In this example, we add an inbox description to http://localhost:3000/foo/. This allows discovery of the ldp:inbox as described in the Linked Data Notifications specification.
We have started the CSS with the default configuration and have already created an inbox at http://localhost:3000/inbox/.
Since we don't know the location of the description resource, we first send a HEAD request to the resource to obtain the URL of its description resource.
curl --head 'http://localhost:3000/foo/'\n which will produce a response with at least these headers:
HTTP/1.1 200 OK\nLink: <http://localhost:3000/foo/.meta>; rel=\"describedby\"\n Now that we have the URL of the description resource, we create a patch for adding the inbox in the description of the resource.
curl -X PATCH 'http://localhost:3000/foo/.meta' \\\n-H 'Content-Type: text/n3' \\\n--data-raw '@prefix solid: <http://www.w3.org/ns/solid/terms#>.\n<> a solid:InsertDeletePatch;\nsolid:inserts { <http://localhost:3000/foo/> <http://www.w3.org/ns/ldp#inbox> <http://localhost:3000/inbox/>. }.'\n After this update, we can verify that the inbox is added by performing a GET request to the description resource.
curl 'http://localhost:3000/foo/.meta'\n which returns the following body:
@prefix dc: <http://purl.org/dc/terms/>.\n@prefix ldp: <http://www.w3.org/ns/ldp#>.\n@prefix posix: <http://www.w3.org/ns/posix/stat#>.\n@prefix xsd: <http://www.w3.org/2001/XMLSchema#>.\n\n<http://localhost:3000/foo/> a ldp:Container, ldp:BasicContainer, ldp:Resource;\n dc:modified \"2022-06-09T08:17:07.000Z\"^^xsd:dateTime;\n ldp:inbox <http://localhost:3000/inbox/>;.\n This can also be verified by sending a GET request to the subject resource itself. The inbox location can also be found in the Link headers.
curl -v 'http://localhost:3000/foo/'\n HTTP/1.1 200 OK\nLink: <http://localhost:3000/inbox/>; rel=\"http://www.w3.org/ns/ldp#inbox\"\n"},{"location":"usage/notifications/","title":"Receiving notifications","text":"A CSS instance can be configured to support Solid notifications. These can be used to track changes on the server. There are no specific requirements on the type of notifications a Solid server should support, so on this page we'll describe the notification types supported by CSS, and how to make use of the different supported ways to receive notifications.
"},{"location":"usage/notifications/#discovering-subscription-services","title":"Discovering subscription services","text":"CSS only supports discovering the notification subscription services through the storage description resource. This can be found by doing a HEAD request on any resource in your pod and looking for the Link header with the http://www.w3.org/ns/solid/terms#storageDescription relationship.
For example, when hosting the server on localhost with port 3000, the result is:
Link: <http://localhost:3000/.well-known/solid>; rel=\"http://www.w3.org/ns/solid/terms#storageDescription\"\n Doing a GET to http://localhost:3000/.well-known/solid then gives the following result (simplified for readability):
@prefix notify: <http://www.w3.org/ns/solid/notifications#>.\n\n<http://localhost:3000/.well-known/solid>\n a <http://www.w3.org/ns/pim/space#Storage> ;\n notify:subscription <http://localhost:3000/.notifications/WebSocketChannel2023/> ,\n <http://localhost:3000/.notifications/WebhookChannel2023/> .\n<http://localhost:3000/.notifications/WebSocketChannel2023/>\n notify:channelType notify:WebSocketChannel2023 ;\n notify:feature notify:accept ,\n notify:endAt ,\n notify:rate ,\n notify:startAt ,\n notify:state .\n<http://localhost:3000/.notifications/WebhookChannel2023/>\n notify:channelType notify:WebhookChannel2023;\n notify:feature notify:accept ,\n notify:endAt ,\n notify:rate ,\n notify:startAt ,\n notify:state .\n This says that there are two available subscription services that can be used for notifications and where to find them. Note that these discovery requests also support content-negotiation, so you could ask for JSON-LD if you prefer. Currently, however, this JSON-LD will not match the examples from the notification specification.
The above tells us where to send subscriptions and which features are supported for those services. You subscribe to a channel by POSTing a JSON-LD document to the subscription services. There are some small differences in the structure of these documents, depending on the channel type, which will be discussed below.
Subscription requests need to be authenticated using Solid-OIDC. The server will check whether you have Read permission on the resource you want to listen to. Requests without Read permission will be rejected.
There are currently two supported ways to get notifications in CSS, depending on your configuration: the notification channel types WebSocketChannel2023 and WebhookChannel2023.
To subscribe to the http://localhost:3000/foo resource using WebSockets, you use an authenticated POST request to send the following JSON-LD document to the server, at http://localhost:3000/.notifications/WebSocketChannel2023/:
{\n \"@context\": [ \"https://www.w3.org/ns/solid/notification/v1\" ],\n \"type\": \"http://www.w3.org/ns/solid/notifications#WebSocketChannel2023\",\n \"topic\": \"http://localhost:3000/foo\"\n}\n If you have Read permissions, the server's reply will look like this:
{\n \"@context\": [ \"https://www.w3.org/ns/solid/notification/v1\" ],\n \"id\": \"http://localhost:3000/.notifications/WebSocketChannel2023/dea6f614-08ab-4cc1-bbca-5dece0afb1e2\",\n \"type\": \"http://www.w3.org/ns/solid/notifications#WebSocketChannel2023\",\n \"topic\": \"http://localhost:3000/foo\",\n \"receiveFrom\": \"ws://localhost:3000/.notifications/WebSocketChannel2023/?auth=http%3A%2F%2Flocalhost%3A3000%2F.notifications%2FWebSocketChannel2023%2Fdea6f614-08ab-4cc1-bbca-5dece0afb1e2\"\n}\n The most important field is receiveFrom. This field tells you the WebSocket to which you need to connect, through which you will start receiving notifications. In JavaScript, this can be done using the WebSocket object, such as:
const ws = new WebSocket(receiveFrom);\nws.on('message', (notification) => console.log(notification));\n"},{"location":"usage/notifications/#webhooks","title":"Webhooks","text":"Similar to the WebSocket subscription, below is sample JSON-LD that would be sent to http://localhost:3000/.notifications/WebhookChannel2023/:
{\n \"@context\": [ \"https://www.w3.org/ns/solid/notification/v1\" ],\n \"type\": \"http://www.w3.org/ns/solid/notifications#WebhookChannel2023\",\n \"topic\": \"http://localhost:3000/foo\",\n \"sendTo\": \"https://example.com/webhook\"\n}\n Note that this document has an additional sendTo field. This is the Webhook URL of your server, the URL to which you want the notifications to be sent.
The response would then be something like this:
{\n \"@context\": [ \"https://www.w3.org/ns/solid/notification/v1\" ],\n \"id\": \"http://localhost:3000/.notifications/WebhookChannel2023/eeaf2c17-699a-4e53-8355-e91d13807e5f\",\n \"type\": \"http://www.w3.org/ns/solid/notifications#WebhookChannel2023\",\n \"topic\": \"http://localhost:3000/foo\",\n \"sendTo\": \"https://example.com/webhook\"\n}\n"},{"location":"usage/notifications/#unsubscribing-from-a-notification-channel","title":"Unsubscribing from a notification channel","text":"Note
This feature is not part of the Solid Notification v0.2 specification, so it might be changed or removed in the future.
If you no longer want to receive notifications on the channel you created, you can send a DELETE request to the channel to remove it. Use the value found in the id field of the subscription response. There is no way to retrieve this identifier later on, so make sure to keep track of it just in case you want to unsubscribe at some point. No authorization is needed for this request.
Below is an example notification that would be sent when a resource changes:
{\n \"@context\": [\n \"https://www.w3.org/ns/activitystreams\",\n \"https://www.w3.org/ns/solid/notification/v1\"\n ],\n \"id\": \"urn:123456:http://example.com/foo\",\n \"type\": \"Update\",\n \"object\": \"http://localhost:3000/foo\",\n \"state\": \"987654\",\n \"published\": \"2023-02-09T15:08:12.345Z\"\n}\n A notification contains the following fields:
- id: A unique identifier for this notification.
- type: What happened to trigger the notification. We discuss the possible values below.
- object: The resource that changed.
- state: An identifier indicating the state of the resource. This corresponds to the ETag value you get when doing a request on the resource itself.
- published: When this change occurred.

CSS supports five different notification types that the client can receive. The format of the notification can change slightly depending on the type.
Resource notification types:
- Create: When the resource is created.
- Update: When the existing resource is changed.
- Delete: When the resource is deleted. Does not have a state field.

Additionally, when listening to a container, there are two extra notifications that are sent out when the contents of the container change. For these notifications, the object field references the resource that was added or removed, while the new target field references the container itself.
- Add: When a new resource is added to the container.
- Remove: When a resource is removed from the container.

The Solid notification specification describes several extra features that can be supported by notification channels. By default, these are all supported on the channels of a CSS instance, as can be seen in the descriptions returned by the server above. Each feature can be enabled by adding a field to the JSON-LD you send during subscription. The available fields are:
- startAt: An xsd:dateTime describing when you want notifications to start. No notifications will be sent on this channel before this time.
- endAt: An xsd:dateTime describing when you want notifications to stop. The channel will be destroyed at that time, and no more notifications will be sent.
- state: A string corresponding to the state string of a resource notification. If this value differs from the actual state of the resource, a notification will be sent out immediately to inform the client that its stored state is outdated.
- rate: An xsd:duration indicating how often notifications can be sent out. A new notification will only be sent out after this much time has passed since the previous notification.
- accept: A description of the content-type(s) in which the client would want to receive the notifications. Expects the same values as an Accept HTTP header.

There is not much restriction on who can create a new notification channel; only Read permissions on the target resource are required. It is therefore possible for the server to accumulate created channels. As these channels still get used every time their corresponding resource changes, this could degrade server performance.
For this reason, the default server configuration removes notification channels after two weeks (20160 minutes). You can modify this behaviour by adding the following block to your configuration:
{\n \"@id\": \"urn:solid-server:default:WebSocket2023Subscriber\",\n \"@type\": \"NotificationSubscriber\",\n \"maxDuration\": 20160\n}\n maxDuration defines after how many minutes every channel will be removed. Setting this value to 0 will allow channels to exist forever. Similarly, to change the maximum duration of webhook channels you can use the identifier urn:solid-server:default:WebhookSubscriber.
If you need to seed accounts and pods, you can use the --seedConfig command line option, whose value is the path to a JSON file containing configurations for every required pod. The file needs to contain an array of JSON objects, with each object containing at least an email and a password field. Each object can also contain a pods array with multiple pod objects to create pods for the account; their contents are the same as those of the corresponding JSON API.
For example:
[\n {\n \"email\": \"hello@example.com\",\n \"password\": \"abc123\"\n },\n {\n \"email\": \"hello2@example.com\",\n \"password\": \"123abc\",\n \"pods\": [\n { \"name\": \"pod1\" },\n { \"name\": \"pod2\" }\n ]\n }\n]\n This feature cannot be used to register pods with pre-existing WebIDs, which requires an interactive validation step, unless you disable the WebID ownership check in your server configuration.
Note that pod seeding is made for a default server setup with standard email/password login. If you add a new login method you will need to create a new implementation of pod seeding if you want to use it.
"},{"location":"usage/starting-server/","title":"Starting the Community Solid Server","text":""},{"location":"usage/starting-server/#quickly-spinning-up-a-server","title":"Quickly spinning up a server","text":"Use Node.js\u00a018.0 or up and execute:
npx @solid/community-server\n Now visit your brand new server at http://localhost:3000/!
To persist your pod's contents between restarts, use:
npx @solid/community-server -c @css:config/file.json -f data/\n"},{"location":"usage/starting-server/#local-installation","title":"Local installation","text":"Install the npm package globally with:
npm install -g @solid/community-server\n To run the server with in-memory storage, use:
community-solid-server # add parameters if needed\n To run the server with your current folder as storage, use:
community-solid-server -c @css:config/file.json -f data/\n"},{"location":"usage/starting-server/#configuring-the-server","title":"Configuring the server","text":"The Community Solid Server is designed to be flexible such that people can easily run different configurations. This is useful for customizing the server with plugins, testing applications in different setups, or developing new parts for the server without needing to change its base code.
An easy way to customize the server is by passing parameters to the server command. These parameters give you direct access to some commonly used settings:
| parameter name | default value | description |
| --- | --- | --- |
| --port, -p | 3000 | The TCP port on which the server should listen. |
| --baseUrl, -b | http://localhost:$PORT/ | The base URL used internally to generate URLs. Change this if your server does not run on http://localhost:$PORT/. |
| --socket | | The Unix Domain Socket on which the server should listen. --baseUrl must be set if this option is provided. |
| --loggingLevel, -l | info | The detail level of logging; useful for debugging problems. Use debug for full information. |
| --config, -c | @css:config/default.json | The configuration(s) for the server. The default only stores data in memory; to persist to your filesystem, use @css:config/file.json. |
| --rootFilePath, -f | ./ | Root folder where the server stores data, when using a file-based configuration. |
| --sparqlEndpoint, -s | | URL of the SPARQL endpoint, when using a quadstore-based configuration. |
| --showStackTrace, -t | false | Enables detailed logging on error output. |
| --podConfigJson | ./pod-config.json | Path to the file that keeps track of dynamic Pod configurations. Only relevant when using @css:config/dynamic.json. |
| --seedConfig | | Path to the file that keeps track of seeded account configurations. |
| --mainModulePath, -m | | Path from where Components.js will start its lookup when initializing configurations. |
| --workers, -w | 1 | Run in multithreaded mode using workers. Special values are -1 (scale to num_cores-1), 0 (scale to num_cores) and 1 (singlethreaded). |

Parameters can also be passed through environment variables.
They are prefixed with CSS_ and converted from camelCase to CAMEL_CASE,
e.g. --showStackTrace => CSS_SHOW_STACK_TRACE.
Command-line arguments will always override environment variables.
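The flag-to-variable conversion described above can be sketched as a small helper function (hypothetical, for illustration only):

```javascript
// Sketch of the documented conversion: strip the leading dashes,
// turn camelCase into underscore-separated words, uppercase the result,
// and add the CSS_ prefix.
function toEnvVarName(cliFlag) {
  const name = cliFlag.replace(/^--/, '');
  return `CSS_${name.replace(/([A-Z])/g, '_$1').toUpperCase()}`;
}

console.log(toEnvVarName('--showStackTrace')); // CSS_SHOW_STACK_TRACE
console.log(toEnvVarName('--loggingLevel'));   // CSS_LOGGING_LEVEL
```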
"},{"location":"usage/starting-server/#alternative-ways-to-run-the-server","title":"Alternative ways to run the server","text":""},{"location":"usage/starting-server/#from-source","title":"From source","text":"If you rather prefer to run the latest source code version, or if you want to try a specific branch of the code, you can use:
git clone https://github.com/CommunitySolidServer/CommunitySolidServer.git\ncd CommunitySolidServer\nnpm ci\nnpm start -- # add parameters if needed\n"},{"location":"usage/starting-server/#via-docker","title":"Via Docker","text":"Docker allows you to run the server without having Node.js installed. Images are built on each tagged version and hosted on Docker Hub.
# Clone the repo to get access to the configs\ngit clone https://github.com/CommunitySolidServer/CommunitySolidServer.git\ncd CommunitySolidServer\n# Run the image, serving your `~/Solid` directory on `http://localhost:3000`\ndocker run --rm -v ~/Solid:/data -p 3000:3000 -it solidproject/community-server:latest\n# Or use one of the built-in configurations\ndocker run --rm -p 3000:3000 -it solidproject/community-server -c config/default.json\n# Or use your own configuration mapped to the right directory\ndocker run --rm -v ~/solid-config:/config -p 3000:3000 -it solidproject/community-server -c /config/my-config.json\n# Or use environment variables to configure your css instance\ndocker run --rm -v ~/Solid:/data -p 3000:3000 -it -e CSS_CONFIG=config/file-no-setup.json -e CSS_LOGGING_LEVEL=debug solidproject/community-server\n"},{"location":"usage/starting-server/#using-a-helm-chart","title":"Using a Helm Chart","text":"The official Helm Chart for Kubernetes deployment is maintained at CommunitySolidServer/css-helm-chart and published on ArtifactHUB. There you will find complete installation instructions.
# Summary\nhelm repo add community-solid-server https://communitysolidserver.github.io/css-helm-chart/charts/\nhelm install my-css community-solid-server/community-solid-server\n"},{"location":"usage/account/json-api/","title":"Account management JSON API","text":"Everything related to account management is done through a JSON API, of which we will describe all paths below. There are also HTML pages available to handle account management that use these APIs internally. Links to these can be found in the HTML controls. All APIs expect JSON as input, and will return JSON objects as output.
"},{"location":"usage/account/json-api/#finding-api-urls","title":"Finding API URLs","text":"All URLs below are relative to the index account API URL, which by default is http://localhost:3000/.account/. Every response of an API request will contain a controls object, containing all the URLs of the other API endpoints. It is generally advised to make use of these controls instead of hardcoding the URLs. Only the initial index URL needs to be known then to find the controls. Certain controls will be missing if those features are disabled in the configuration.
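A minimal sketch of following controls instead of hardcoding URLs; the helper is hypothetical and the controls object below is abbreviated from an index API response:

```javascript
// Look up a nested control URL such as 'main.logins' in a controls object.
function getControl(controls, path) {
  return path.split('.').reduce(
    (obj, key) => (obj === undefined ? undefined : obj[key]),
    controls,
  );
}

// Abbreviated example of a controls object returned by the index API.
const controls = {
  main: {
    index: 'http://localhost:3000/.account/',
    logins: 'http://localhost:3000/.account/login/',
  },
};
console.log(getControl(controls, 'main.logins')); // http://localhost:3000/.account/login/
```

Returning undefined for a missing path mirrors the fact that certain controls are absent when their features are disabled, so callers should check before using a control.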
Many APIs require a POST request to perform an action. When doing a GET request on these APIs, they will return an object describing what input is expected for the POST.
"},{"location":"usage/account/json-api/#authorization","title":"Authorization","text":"After logging in, the API will return a set-cookie header of the format css-account=$VALUE. This cookie is necessary to access many of the APIs. When this cookie is included, the controls object will also be extended with new URLs that are now accessible. When logging in, the response JSON body will also contain an authorization field containing the $VALUE value mentioned above. Instead of using cookies, this value can be used in an Authorization header with value CSS-Account-Token $VALUE to achieve the same result.
The expiration time of this cookie will be refreshed every time there is a successful request to the server with that cookie.
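To sketch the two equivalent ways of authenticating described above: given the $VALUE returned at login, a client can send it either as a cookie or as an Authorization header. The accountAuthHeaders helper and the 'abc123' value below are illustrative, not part of the server API.

```javascript
// Hypothetical helper building the request headers for an authenticated call,
// using either the cookie variant or the Authorization header variant.
function accountAuthHeaders(value, { useCookie = false } = {}) {
  return useCookie
    ? { cookie: `css-account=${value}` }
    : { authorization: `CSS-Account-Token ${value}` };
}

accountAuthHeaders('abc123');
// → { authorization: 'CSS-Account-Token abc123' }
accountAuthHeaders('abc123', { useCookie: true });
// → { cookie: 'css-account=abc123' }
```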
"},{"location":"usage/account/json-api/#redirecting","title":"Redirecting","text":"As redirects through 3xx status codes can make working with JSON APIs more difficult, the API never uses them. Instead, if a redirect is required after an action, the response JSON object will contain a location field. This is the next URL that should be fetched. This is mostly relevant in OIDC interactions, as these redirects cause the interaction to progress.
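The resulting client-side loop can be sketched as follows. followLocations and its injected fetchJson function are hypothetical; in practice fetchJson would wrap fetch() with the cookie or Authorization header set and parse the JSON body. The stub below stands in for real HTTP calls, and its URLs are illustrative.

```javascript
// Hypothetical sketch: keep fetching as long as the JSON response
// contains a `location` field pointing to the next URL.
async function followLocations(fetchJson, url) {
  let response = await fetchJson(url);
  while (response.location) {
    response = await fetchJson(response.location);
  }
  return response;
}

// Stub responses standing in for real HTTP calls (illustrative URLs):
const pages = {
  'http://localhost:3000/.account/oidc/prompt/': { location: 'http://localhost:3000/.account/login/' },
  'http://localhost:3000/.account/login/': { prompt: 'login' },
};

followLocations(async (url) => pages[url], 'http://localhost:3000/.account/oidc/prompt/')
  .then((result) => console.log(result.prompt)); // logs "login"
```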
Below is an overview of all the keys in a controls object returned by the server, with all features enabled. An example of what such an object looks like can be found at the bottom of the page.
"},{"location":"usage/account/json-api/#controlsmain","title":"controls.main","text":"General controls that require no authentication.
"},{"location":"usage/account/json-api/#controlsmainindex","title":"controls.main.index","text":"General entrypoint to the API. Returns an empty object, including the controls, on all GET requests.
"},{"location":"usage/account/json-api/#controlsmainlogins","title":"controls.main.logins","text":"Returns an overview of all login systems available on the server in the logins object. Keys are a string description of the login system and values are links to their login pages. This can be used to let users choose how they want to log in. By default, the object only contains the email/password login system.
All controls related to account management. All of these require authorization, except for the create action.
"},{"location":"usage/account/json-api/#controlsaccountcreate","title":"controls.account.create","text":"Creates a new account on empty POST requests. The response contains the necessary cookie values to log in. This account cannot be used until a login method has been added to it. All other interactions will fail until this is the case. See the controls.password.create section below for more information on how to do this. This account will expire after some time if no login method is added.
"},{"location":"usage/account/json-api/#controlsaccountlogout","title":"controls.account.logout","text":"Logs the account out on an empty POST request. Invalidates the cookie that was used.
"},{"location":"usage/account/json-api/#controlsaccountwebid","title":"controls.account.webId","text":"GET requests return all WebIDs linked to this account in the following format:
{\n \"webIdLinks\": {\n \"http://localhost:3000/test/profile/card#me\": \"http://localhost:3000/.account/account/c63c9e6f-48f8-40d0-8fec-238da893a7f2/webid/fdfc48c1-fe6f-4ce7-9e9f-1dc47eff803d/\"\n }\n}\n The URL value is the resource URL corresponding to the link with this WebID. The link can be removed by sending a DELETE request to that URL.
POST requests link a WebID to the account, allowing the account to identify as that WebID during an OIDC authentication interaction. Expected input is an object containing a webId field. The response will include the resource URL.
If the chosen WebID is contained within a Solid pod created by this account, the request will succeed immediately. If not, an error will be thrown, asking the user to add a specific triple to the WebID to confirm that they are the owner. After this triple is added, a second request will be successful.
"},{"location":"usage/account/json-api/#controlsaccountpod","title":"controls.account.pod","text":"GET requests return all pods created by this account in the following format:
{\n \"pods\": {\n \"http://localhost:3000/test/\": \"http://localhost:3000/.account/account/c63c9e6f-48f8-40d0-8fec-238da893a7f2/pod/df2d5a06-3ecd-4eaf-ac8f-b88a8579e100/\"\n }\n}\n The URL value is the resource URL corresponding to the pod. Doing a GET request to this resource will return the base URL of the pod and all its owners, as shown below. You can send a POST request to this resource with a webId and visible: boolean field to add/update an owner and set its visibility. Visibility determines whether the owner is exposed through a link header when requesting the pod. You can also send a POST request to this resource with a webId and remove: true field to remove the owner.
{\n \"baseUrl\": \"http://localhost:3000/my-pod/\",\n \"owners\": [\n {\n \"webId\": \"http://localhost:3000/my-pod/profile/card#me\",\n \"visible\": false\n }\n ]\n}\n POST requests to controls.account.pod create a Solid pod for the account. The only required field is name, which will determine the name of the pod.
Additionally, a settings object can be sent along, the values of which will be sent to the templates used when generating the pod. If this settings object contains a webId field, that WebID will be the WebID that has initial access to the pod.
If no WebID value is provided, a WebID will be generated in the pod and immediately linked to the account as described in controls.account.webId. This WebID will then be the WebID that has initial access.
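As a sketch, the JSON body for such a pod-creation POST could be built as follows. podCreationBody is a hypothetical helper; only the field names (name, settings, webId) come from the description above.

```javascript
// Hypothetical helper building the body for a POST to controls.account.pod.
// `name` is required; `webId` is optional and, when given, ends up in the
// settings object as the WebID that gets initial access to the pod.
function podCreationBody(name, webId) {
  const body = { name };
  if (webId) {
    body.settings = { webId };
  }
  return body;
}

podCreationBody('my-pod');
// → { name: 'my-pod' }
podCreationBody('my-pod', 'http://localhost:3000/other/profile/card#me');
// → { name: 'my-pod', settings: { webId: 'http://localhost:3000/other/profile/card#me' } }
```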
"},{"location":"usage/account/json-api/#controlsaccountclientcredentials","title":"controls.account.clientCredentials","text":"GET requests return all client credentials created by this account in the following format:
{\n \"clientCredentials\": {\n \"token_562cdeb5-d4b2-4905-9e62-8969ac10daaa\": \"http://localhost:3000/.account/account/c63c9e6f-48f8-40d0-8fec-238da893a7f2/client-credentials/063ee3a7-e80f-4508-9f79-ffddda9df8d4/\"\n }\n}\n The URL value is the resource URL corresponding to that specific token. Sending a GET request to that URL will return information about the token, such as what the associated WebID is. The token can be removed by sending a DELETE request to that URL.
Creates a client credentials token on POST requests. More information on these tokens can be found here. Expected input is an object containing a name and webId field. The name is optional and will be used to name the token; the WebID determines which WebID you will identify as when using that token. It needs to be a WebID linked to the account as described in controls.account.webId.
Controls related to managing the email/password login method.
"},{"location":"usage/account/json-api/#controlspasswordcreate","title":"controls.password.create","text":"GET requests return all email/password logins of this account in the following format:
{\n \"passwordLogins\": {\n \"test@example.com\": \"http://localhost:3000/.account/account/c63c9e6f-48f8-40d0-8fec-238da893a7f2/login/password/7f042779-e2b2-444d-8cd9-50bd9cfa516d/\"\n }\n}\n The URL value is the resource URL corresponding to the login with the given email address. The login can be removed by sending a DELETE request to that URL. The password can be updated by sending a POST request to that URL with the body containing an oldPassword and a newPassword field.
POST requests create an email/password login and add it to the account you are logged in as. Expects email and password fields.
POST requests log a user in and return the relevant cookie values. Expected fields are email, password, and optionally a remember boolean. The remember value determines if the returned cookie is only valid for the session, or for a longer time.
Can be used when a user forgets their password. POST requests with an email field will send an email with a link to reset the password.
Used to handle reset password URLs generated when a user forgets their password. Expected input values for the POST request are recordId, which was generated when sending the reset mail, and password with the new password value.
These controls are related to completing OIDC interactions.
"},{"location":"usage/account/json-api/#controlsoidccancel","title":"controls.oidc.cancel","text":"Sending a POST request to this API will cancel the OIDC interaction and return the user to the client that started the interaction.
"},{"location":"usage/account/json-api/#controlsoidcprompt","title":"controls.oidc.prompt","text":"This API is used to determine what the next necessary step is in the OIDC interaction. The response will contain a location field, containing the URL to the next page the user should go to, and a prompt field, indicating the next step that is necessary to progress the OIDC interaction. The three possible prompts are the following:
Relevant for solving the login prompt. A GET request will return a list of WebIDs the user can choose from. This is the same result as requesting the account information and looking at the linked WebIDs. The POST request expects a webId value and optionally a remember boolean. The latter determines if the server should remember the picked WebID for later interactions.
POST requests to this API will cause the OIDC interaction to forget the picked WebID so a new one can be picked by the user.
"},{"location":"usage/account/json-api/#controlsoidcconsent","title":"controls.oidc.consent","text":"A GET request to this API will return all the relevant information about the client doing the request. A POST request causes the OIDC interaction to finish. It can have an optional remember value, which allows for refresh tokens if it is set to true.
All these controls link to HTML pages and are thus mostly relevant to provide links to let the user navigate around. The most important one is probably controls.html.account.account, which links to an overview page for the account.
Below is an example of a controls object in a response.
{\n \"main\": {\n \"index\": \"http://localhost:3000/.account/\",\n \"logins\": \"http://localhost:3000/.account/login/\"\n },\n \"account\": {\n \"create\": \"http://localhost:3000/.account/account/\",\n \"logout\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/logout/\",\n \"webId\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/webid/\",\n \"pod\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/pod/\",\n \"clientCredentials\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/client-credentials/\"\n },\n \"password\": {\n \"create\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/login/password/\",\n \"login\": \"http://localhost:3000/.account/login/password/\",\n \"forgot\": \"http://localhost:3000/.account/login/password/forgot/\",\n \"reset\": \"http://localhost:3000/.account/login/password/reset/\"\n },\n \"oidc\": {\n \"cancel\": \"http://localhost:3000/.account/oidc/cancel/\",\n \"prompt\": \"http://localhost:3000/.account/oidc/prompt/\",\n \"webId\": \"http://localhost:3000/.account/oidc/pick-webid/\",\n \"forgetWebId\": \"http://localhost:3000/.account/oidc/forget-webid/\",\n \"consent\": \"http://localhost:3000/.account/oidc/consent/\"\n },\n \"html\": {\n \"main\": {\n \"login\": \"http://localhost:3000/.account/login/\"\n },\n \"account\": {\n \"createClientCredentials\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/client-credentials/\",\n \"createPod\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/pod/\",\n \"linkWebId\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/webid/\",\n \"account\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/\"\n },\n \"password\": {\n \"register\": \"http://localhost:3000/.account/login/password/register/\",\n \"login\": 
\"http://localhost:3000/.account/login/password/\",\n \"create\": \"http://localhost:3000/.account/account/ade5c046-e882-4b56-80f4-18cb16433360/login/password/\",\n \"forgot\": \"http://localhost:3000/.account/login/password/forgot/\"\n }\n }\n}\n"},{"location":"usage/account/login-method/","title":"Adding a new login method","text":"By default, the server allows users to use email/password combinations to identify as the owner of their account. But, just like with many other parts of the server, this can be extended so other login methods can be used. Here we'll cover everything that is necessary.
"},{"location":"usage/account/login-method/#components","title":"Components","text":"These are the components that are needed for adding a new login method. Not all of these are mandatory, but they can make the life of the user easier when trying to find and use the new method. Also have a look at the general structure of new API components to see what is expected of such a component.
"},{"location":"usage/account/login-method/#create-component","title":"Create component","text":"There needs to be one or more components that allow a user to create an instance of the new login method and assign it to their account. The CreatePasswordHandler can be used as an example. This does not necessarily have to happen in a single request; multiple requests can be used, for example if the user has to perform actions on an external site. The only thing that matters is that at the end there is a new entry in the account's logins object.
When adding logins for your method, a new key will need to be chosen to group these logins together. The email/password method uses password, for example.
A new storage will probably need to be created to store relevant metadata about this login method entry. Below is an example of how the PasswordStore is created:
{\n \"@id\": \"urn:solid-server:default:PasswordStore\",\n \"@type\": \"BasePasswordStore\",\n \"storage\": {\n \"@id\": \"urn:solid-server:default:PasswordStorage\",\n \"@type\": \"EncodingPathStorage\",\n \"relativePath\": \"/accounts/logins/password/\",\n \"source\": {\n \"@id\": \"urn:solid-server:default:KeyValueStorage\"\n }\n }\n}\n"},{"location":"usage/account/login-method/#login-component","title":"Login component","text":"After creating a login instance, a user needs to be able to log in using the new method. This can again be done with multiple API calls if necessary, but the final one needs to be one that handles the necessary actions such as creating a cookie and finishing the OIDC interaction if necessary. The ResolveLoginHandler can be extended to take care of most of this, the PasswordLoginHandler provides an example of this.
Besides creating a login instance and logging in, it is always possible to offer additional functionality specific to this login method. The email/password method, for example, also has components for password recovery and updating a password.
"},{"location":"usage/account/login-method/#html-pages","title":"HTML pages","text":"To make life easier for users, at the very least you probably want to make an HTML page which people can use to create an instance of your login method. Besides that, you could also make a page where people can combine creating an account with creating a login instance. The templates/identity folder contains all the pages the server has by default, which can be used as inspiration.
These pages need to be linked to the urn:solid-server:default:HtmlViewHandler. Below is an example of this:
{\n \"@id\": \"urn:solid-server:default:HtmlViewHandler\",\n \"@type\": \"HtmlViewHandler\",\n \"templates\": [{\n \"@id\": \"urn:solid-server:default:CreatePasswordHtml\",\n \"@type\": \"HtmlViewEntry\",\n \"filePath\": \"@css:templates/identity/password/create.html.ejs\",\n \"route\": {\n \"@id\": \"urn:solid-server:default:AccountPasswordRoute\"\n }\n }]\n}\n"},{"location":"usage/account/login-method/#updating-the-login-handler","title":"Updating the login handler","text":"The urn:solid-server:default:LoginHandler returns a list of available login methods, which are used to offer users a choice of which login method they want to use on the default login page. If you want the new method to also be offered you will have to add similar Components.js configuration:
{\n \"@id\": \"urn:solid-server:default:LoginHandler\",\n \"@type\": \"ControlHandler\",\n \"controls\": [\n {\n \"ControlHandler:_controls_key\": \"Email/password combination\",\n \"ControlHandler:_controls_value\": {\n \"@id\": \"urn:solid-server:default:LoginPasswordRoute\"\n }\n }\n ]\n}\n"},{"location":"usage/account/login-method/#controls","title":"Controls","text":"All new relevant API endpoints should be added to the controls object, otherwise there is no way for users to find out where to send their requests. Similarly, links to the HTML pages should also be in the controls, so they can be navigated to. Examples of how to do this can be found here.
The default account overview page makes some assumptions about the controls when building the page. Specifically, it checks if controls.html.<LOGIN_METHOD>.create exists; if so, it automatically creates a link on the page so users can create new login instances for their account.
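That check can be sketched as a small function over the controls object. creatableLoginMethods is a hypothetical illustration; the sample controls object mimics the html part of the example controls object shown earlier, where only the password method has a create entry.

```javascript
// Hypothetical sketch of the account page's check: list the login methods
// that have a controls.html.<LOGIN_METHOD>.create entry.
function creatableLoginMethods(controls) {
  return Object.keys(controls.html ?? {})
    .filter((method) => controls.html[method].create);
}

const controls = {
  html: {
    main: { login: 'http://localhost:3000/.account/login/' },
    password: { create: 'http://localhost:3000/.account/account/123/login/password/' },
  },
};

creatableLoginMethods(controls);
// → ['password']
```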
Below is a description of the changes that are necessary to migrate data from v6 to v7 of the server.
"},{"location":"usage/account/migration/#account-data","title":"Account data","text":"The format of the forgot password records was changed, but since those records are not important and new ones can be created if necessary, they can simply be removed when migrating. By default, these were located in the .internal/forgot-password/ folder, so this entire folder can be removed.
For existing accounts, the data was stored in the following format and location. In addition to the details below, the tails of all resource identifiers were base64 encoded.
Account data was stored under .internal/accounts/, with keys of the form \"account/\" + encodeURIComponent(email) and values { webId, email, password, verified }. WebID data was stored under .internal/accounts/ as well, so in the same location as the account data, with the WebID as key and values { useIdp, podBaseUrl?, clientCredentials? }. useIdp indicates if the WebID is linked to the account for identification, podBaseUrl is defined if the account was created with a pod, and clientCredentials is an array containing the labels of all client credentials tokens created by the account. Client credentials tokens were stored under .internal/accounts/credentials/, with values { webId, secret }. The V6MigrationInitializer class is responsible for migrating from this format to the new one and does so by reading in the old data and creating new instances in the IndexedStorage. In case you have an instance that made impactful changes to how storage is handled, that would be the class to investigate and replace. Password data can be reused as the algorithm there was not changed. Email addresses are now stored in lowercase, so these need to be converted during migration.
The format of all other internal data was changed in the same way:
All internal storage that is not account data as described in the previous section will be removed to prevent issues with outdated formats. This applies to the following stored data:
.internal/idp/keys/, .internal/idp/adapter/, .internal/notifications/, the rootInitialized key, which prevents initialized roots from being overwritten, and .internal/setup/.
// This assumes your server is started under http://localhost:3000/.
// It also assumes you have already logged in and `cookie` contains a valid cookie header
// as described in the API documentation.