{"config":{"indexing":"full","lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Welcome \u00b6 Welcome to the Community Solid Server! Here we will cover many aspects of the server, such as how to propose changes, what the architecture looks like, and how to use many of the features the server provides. The documentation here is still incomplete both in content and structure, so feel free to open a discussion about things you want to see added. While we try to update this documentation together with updates in the code, it is always possible we miss something, so please report it if you find incorrect information or links that no longer work. An introductory tutorial that gives a quick overview of the Solid and CSS basics can be found here . This is a good way to get started with the server and its setup. If you want to know what is new in the latest version, you can check out the release notes for a high level overview and information on how to migrate your configuration to the next version. A list that includes all minor changes can be found in the changelog Using the server \u00b6 Basic example HTTP requests How to use the Identity Provider How to automate authentication How to automatically seed pods on startup What the internals look like \u00b6 How the server uses dependency injection What the architecture looks like Making changes \u00b6 How to make changes to the repository For core developers with push access only: How to release a new version","title":"Welcome"},{"location":"#welcome","text":"Welcome to the Community Solid Server! Here we will cover many aspects of the server, such as how to propose changes, what the architecture looks like, and how to use many of the features the server provides. The documentation here is still incomplete both in content and structure, so feel free to open a discussion about things you want to see added. While we try to update this documentation together with updates in the code, it is always possible we miss something, so please report it if you find incorrect information or links that no longer work. An introductory tutorial that gives a quick overview of the Solid and CSS basics can be found here . This is a good way to get started with the server and its setup. If you want to know what is new in the latest version, you can check out the release notes for a high level overview and information on how to migrate your configuration to the next version. A list that includes all minor changes can be found in the changelog","title":"Welcome"},{"location":"#using-the-server","text":"Basic example HTTP requests How to use the Identity Provider How to automate authentication How to automatically seed pods on startup","title":"Using the server"},{"location":"#what-the-internals-look-like","text":"How the server uses dependency injection What the architecture looks like","title":"What the internals look like"},{"location":"#making-changes","text":"How to make changes to the repository For core developers with push access only: How to release a new version","title":"Making changes"},{"location":"architecture/","text":"Architecture overview \u00b6 The initial architecture document the project was started from can be found here . Many things have been added since the original inception of the project, but the core ideas within that document are still valid. As can be seen from the architecture, an important idea is the modularity of all components. 
No actual implementations are defined there, only their interfaces. Making all the components independent of each other in such a way provides us with enormous flexibility: they can all be replaced by a different implementation, without impacting anything else. This is how we can provide many different configurations for the server, and why it is impossible to provide ready solutions for all possible combinations. Handlers \u00b6 A very important building block that gets reused in many places is the AsyncHandler . The idea is that a handler has 2 important functions. canHandle determines if this class is capable of correctly handling the request, and throws an error if it cannot. For example, a class that converts JSON-LD to turtle can handle all requests containing JSON-LD data, but does not know what to do with a request that contains a JPEG. The second function is handle where the class executes on the input data and returns the result. If an error gets thrown here it means there is an issue with the input. For example, if the input data claims to be JSON-LD but is actually not. The power of using this interface really shines when using certain utility classes. The one we use the most is the WaterfallHandler , which takes as input a list of handlers of the same type. The input and output of a WaterfallHandler is the same as those of its inputs, meaning it can be used in the same places. When doing a canHandle call, it will iterate over all its input handlers to find the first one where the canHandle call succeeds, and when calling handle it will return the result of that specific handler. This allows us to chain together many handlers that each have their specific niche, such as handlers that each support a specific HTTP method (GET/PUT/POST/etc.), or handlers that only take requests targeting a specific subset of URLs. To the parent class it will look like it has a handler that supports all methods, while in practice it will be a WaterfallHandler containing all these separate handlers. Some other utility classes are the ParallelHandler that runs all handlers simultaneously, and the SequenceHandler that runs all of them one after the other. Since multiple handlers are executed here, these only work for handlers that have no output. Streams \u00b6 Almost all data is handled in a streaming fashion. This allows us to work with very large resources without having to fully load them in memory: a client could be reading data that is being returned by the server while the server is still reading the file. Internally this means we are mostly handling data as Readable objects. We actually use Guarded<Readable> , which is an internal format we created to help us with error handling. Such streams can be created using utility functions such as guardStream and guardedStreamFrom . Similarly, we have a pipeSafely utility to pipe streams in such a way that also helps with errors. Example request \u00b6 In this section we will give a high-level overview of all the components a request passes through when it enters the server. This is specifically an LDP request, e.g. a POST request to create a new resource. The correct HttpHandler gets found, responsible for LDP requests. The HTTP request gets parsed into a manageable format, both body and metadata such as headers. The identification credentials of the request, if any, are extracted and parsed to authenticate the calling agent. The request gets authorized or rejected, based on the credentials from step 3 and the authorization rules of the target resource. 
Based on the HTTP method, the corresponding method from the ResourceStore gets called, which in the case of a POST request will return the location of the newly created resource. The returned data and metadata get converted to an HTTP response and sent back in the ResponseWriter . In case any of the steps above fails, an error will be thrown. The ErrorHandler will convert the error to an HTTP response to be returned. Below are sections that go deeper into the specific steps. Not all steps are covered yet; the remaining ones will be added in the future. How authentication and authorization work What the ResourceStore looks like","title":"Architecture"},{"location":"architecture/#architecture-overview","text":"The initial architecture document the project was started from can be found here . Many things have been added since the original inception of the project, but the core ideas within that document are still valid. As can be seen from the architecture, an important idea is the modularity of all components. No actual implementations are defined there, only their interfaces. Making all the components independent of each other in such a way provides us with enormous flexibility: they can all be replaced by a different implementation, without impacting anything else. This is how we can provide many different configurations for the server, and why it is impossible to provide ready solutions for all possible combinations.","title":"Architecture overview"},{"location":"architecture/#handlers","text":"A very important building block that gets reused in many places is the AsyncHandler . The idea is that a handler has 2 important functions. canHandle determines if this class is capable of correctly handling the request, and throws an error if it cannot. For example, a class that converts JSON-LD to turtle can handle all requests containing JSON-LD data, but does not know what to do with a request that contains a JPEG. The second function is handle where the class executes on the input data and returns the result. If an error gets thrown here it means there is an issue with the input. For example, if the input data claims to be JSON-LD but is actually not. The power of using this interface really shines when using certain utility classes. The one we use the most is the WaterfallHandler , which takes as input a list of handlers of the same type. The input and output of a WaterfallHandler is the same as those of its inputs, meaning it can be used in the same places. When doing a canHandle call, it will iterate over all its input handlers to find the first one where the canHandle call succeeds, and when calling handle it will return the result of that specific handler. This allows us to chain together many handlers that each have their specific niche, such as handlers that each support a specific HTTP method (GET/PUT/POST/etc.), or handlers that only take requests targeting a specific subset of URLs. To the parent class it will look like it has a handler that supports all methods, while in practice it will be a WaterfallHandler containing all these separate handlers. Some other utility classes are the ParallelHandler that runs all handlers simultaneously, and the SequenceHandler that runs all of them one after the other. Since multiple handlers are executed here, these only work for handlers that have no output.","title":"Handlers"},{"location":"architecture/#streams","text":"Almost all data is handled in a streaming fashion. 
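A short sketch of what this looks like in practice (assuming the stream utilities described in this section are exported by the server package):

import { guardedStreamFrom, pipeSafely } from '@solid/community-server';
import { PassThrough } from 'stream';

// Create a guarded Readable from in-memory data
// (in the server it would usually come from disk or the network).
const source = guardedStreamFrom('<ex:s> <ex:p> <ex:o>.');
// Pipe it to a destination while propagating errors instead of losing them.
const piped = pipeSafely(source, new PassThrough());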
This allows us to work with very large resources without having to fully load them in memory: a client could be reading data that is being returned by the server while the server is still reading the file. Internally this means we are mostly handling data as Readable objects. We actually use Guarded<Readable> , which is an internal format we created to help us with error handling. Such streams can be created using utility functions such as guardStream and guardedStreamFrom . Similarly, we have a pipeSafely utility to pipe streams in such a way that also helps with errors.","title":"Streams"},{"location":"architecture/#example-request","text":"In this section we will give a high-level overview of all the components a request passes through when it enters the server. This is specifically an LDP request, e.g. a POST request to create a new resource. The correct HttpHandler gets found, responsible for LDP requests. The HTTP request gets parsed into a manageable format, both body and metadata such as headers. The identification credentials of the request, if any, are extracted and parsed to authenticate the calling agent. The request gets authorized or rejected, based on the credentials from step 3 and the authorization rules of the target resource. Based on the HTTP method, the corresponding method from the ResourceStore gets called, which in the case of a POST request will return the location of the newly created resource. The returned data and metadata get converted to an HTTP response and sent back in the ResponseWriter . In case any of the steps above fails, an error will be thrown. The ErrorHandler will convert the error to an HTTP response to be returned. Below are sections that go deeper into the specific steps. Not all steps are covered yet; the remaining ones will be added in the future. How authentication and authorization work What the ResourceStore looks like","title":"Example request"},{"location":"authorization/","text":"Authorization \u00b6 Authorization is usually handled by the AuthorizingHttpHandler , and goes in the following steps: Identify the credentials of the agent making the call. Extract which access modes are needed for the request. Read the permissions the agent has. Compare the above results to see if the request is allowed. Authentication \u00b6 There are multiple CredentialsExtractor s that each determine identity in a different way. Potentially multiple extractors can apply, making a requesting agent have multiple credentials. The DPoPWebIdExtractor is most relevant for the Solid-OIDC specification , as it parses the access token generated by a Solid Identity Provider. Besides that there are always the public credentials, which everyone has. There are also some debug extractors that can be used to simulate credentials, which can be enabled as different options through the config/ldp/authentication imports. If successful, a CredentialsExtractor will return a key/value map linking the type of credentials to their specific values. Modes extraction \u00b6 Access modes are a predefined list of read , write , append , create and delete . The ModesExtractor s determine which modes will be necessary, based on the request contents. The MethodModesExtractor determines modes based on the HTTP method. A GET request will always need the read mode, for example. Specifically for PATCH requests there are extractors for each supported PATCH type, such as the N3PatchModesExtractor , which parses the N3 Patch body to know if it will add new data or only delete data. 
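As an illustration, the core of a method-based extractor can be thought of as a simple mapping from HTTP methods to access modes (a hypothetical sketch, not the actual implementation):

// Hypothetical sketch: which access modes an HTTP method requires.
type AccessMode = 'read' | 'write' | 'append' | 'create' | 'delete';

function modesForMethod(method: string): AccessMode[] {
  switch (method) {
    case 'GET':
    case 'HEAD': return [ 'read' ];
    case 'PUT': return [ 'write' ]; // may additionally require create if the target does not exist yet
    case 'POST': return [ 'append' ]; // appending a new resource to a container
    case 'DELETE': return [ 'delete' ];
    default: throw new Error(`Unsupported method: ${ method }`);
  }
}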
Permission reading \u00b6 PermissionReaders take the input of the above to determine which permissions are available for which credentials. The modes from the previous step are not yet needed, but can be used as an optimization, as we only need to know if we have permission on those modes. Each reader can potentially return a partial answer if it only checks specific cases. Those results then get combined in the UnionPermissionReader . In the default configuration there are currently 4 relevant permission readers that get combined: PathBasedReader rejects all permissions for certain paths, to prevent access to internal data. OwnerPermissionReader grants control permissions to agents that are trying to access data in a pod that they own. AuxiliaryReader handles all permissions for auxiliary resources by requesting those of the subject resource if necessary. WebAclReader reads the relevant .acl resource to determine the defined permissions. All of the above only applies if you have WebACL enabled. It is also possible to always grant all permissions for debugging reasons by changing the authorization import to config/ldp/authorization/allow-all.json . Authorization \u00b6 All the results of the previous steps then get combined to either allow or reject a request. If no permissions are found for a requested mode, or they are explicitly forbidden, a 401/403 will be returned, depending on whether the agent was logged in.","title":"Authorization"},{"location":"authorization/#authorization","text":"Authorization is usually handled by the AuthorizingHttpHandler , and goes in the following steps: Identify the credentials of the agent making the call. Extract which access modes are needed for the request. Read the permissions the agent has. Compare the above results to see if the request is allowed.","title":"Authorization"},{"location":"authorization/#authentication","text":"There are multiple CredentialsExtractor s that each determine identity in a different way. Potentially multiple extractors can apply, making a requesting agent have multiple credentials. The DPoPWebIdExtractor is most relevant for the Solid-OIDC specification , as it parses the access token generated by a Solid Identity Provider. Besides that there are always the public credentials, which everyone has. There are also some debug extractors that can be used to simulate credentials, which can be enabled as different options through the config/ldp/authentication imports. If successful, a CredentialsExtractor will return a key/value map linking the type of credentials to their specific values.","title":"Authentication"},{"location":"authorization/#modes-extraction","text":"Access modes are a predefined list of read , write , append , create and delete . The ModesExtractor s determine which modes will be necessary, based on the request contents. The MethodModesExtractor determines modes based on the HTTP method. A GET request will always need the read mode, for example. Specifically for PATCH requests there are extractors for each supported PATCH type, such as the N3PatchModesExtractor , which parses the N3 Patch body to know if it will add new data or only delete data.","title":"Modes extraction"},{"location":"authorization/#permission-reading","text":"PermissionReaders take the input of the above to determine which permissions are available for which credentials. The modes from the previous step are not yet needed, but can be used as an optimization, as we only need to know if we have permission on those modes. 
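Conceptually, each such answer can be modelled as a partial map from access modes to booleans, and combining them as a merge where the first verdict per mode wins (a hypothetical sketch of the combination logic, not the actual UnionPermissionReader code):

type AccessMode = 'read' | 'write' | 'append' | 'create' | 'delete';
type PermissionSet = Partial<Record<AccessMode, boolean>>;

// Merge partial answers; later readers only fill in modes
// that earlier readers left undecided.
function mergePermissions(results: PermissionSet[]): PermissionSet {
  const merged: PermissionSet = {};
  for (const result of results) {
    for (const mode of Object.keys(result) as AccessMode[]) {
      merged[mode] ??= result[mode];
    }
  }
  return merged;
}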
Each reader can potentially return a partial answer if it only checks specific cases. Those results then get combined in the UnionPermissionReader . In the default configuration there are currently 4 relevant permission readers that get combined: PathBasedReader rejects all permissions for certain paths, to prevent access to internal data. OwnerPermissionReader grants control permissions to agents that are trying to access data in a pod that they own. AuxiliaryReader handles all permissions for auxiliary resources by requesting those of the subject resource if necessary. WebAclReader reads the relevant .acl resource to determine the defined permissions. All of the above only applies if you have WebACL enabled. It is also possible to always grant all permissions for debugging reasons by changing the authorization import to config/ldp/authorization/allow-all.json .","title":"Permission reading"},{"location":"authorization/#authorization_1","text":"All the results of the previous steps then get combined to either allow or reject a request. If no permissions are found for a requested mode, or they are explicitly forbidden, a 401/403 will be returned, depending on whether the agent was logged in.","title":"Authorization"},{"location":"client-credentials/","text":"Automating authentication with Client Credentials \u00b6 One potential issue for scripts and other applications is that logging in and authenticating require user interaction. The CSS offers an alternative solution for such cases by making use of Client Credentials. Once you have created an account as described in the Identity Provider section , you can request a token that apps can use to authenticate without user input. All requests to the client credentials API currently require you to send along the email and password of that account to identify yourself. This is a temporary solution until the server has more advanced account management, after which this API will change. Below is example code of how to make use of these tokens. It makes use of several utility functions from the Solid Authentication Client . Note that the code below uses top-level await , which not all JavaScript engines support, so this should all be contained in an async function. Generating a token \u00b6 The code below generates a token linked to your account and WebID. This only needs to be done once; afterwards this token can be used for all future requests. import fetch from 'node-fetch' ; // This assumes your server is started under http://localhost:3000/. // This URL can also be found by checking the controls in JSON responses when interacting with the IDP API, // as described in the Identity Provider section. const response = await fetch ( 'http://localhost:3000/idp/credentials/' , { method : 'POST' , headers : { 'content-type' : 'application/json' }, // The email/password fields are those of your account. // The name field will be used when generating the ID of your token. body : JSON.stringify ({ email : 'my-email@example.com' , password : 'my-account-password' , name : 'my-token' }), }); // These are the identifier and secret of your token. // Store the secret somewhere safe as there is no way to request it again from the server! const { id , secret } = await response . json (); Requesting an Access token \u00b6 The ID and secret combination generated above can be used to request an Access Token from the server. This Access Token is only valid for a certain amount of time, after which a new one needs to be requested. 
import { createDpopHeader , generateDpopKeyPair } from '@inrupt/solid-client-authn-core' ; import fetch from 'node-fetch' ; // A key pair is needed for encryption. // This function from `solid-client-authn` generates such a pair for you. const dpopKey = await generateDpopKeyPair (); // These are the ID and secret generated in the previous step. // Both the ID and the secret need to be form-encoded. const authString = ` ${ encodeURIComponent ( id ) } : ${ encodeURIComponent ( secret ) } ` ; // This URL can be found by looking at the \"token_endpoint\" field at // http://localhost:3000/.well-known/openid-configuration // if your server is hosted at http://localhost:3000/. const tokenUrl = 'http://localhost:3000/.oidc/token' ; const response = await fetch ( tokenUrl , { method : 'POST' , headers : { // The header needs to be in base64 encoding. authorization : `Basic ${ Buffer . from ( authString ). toString ( 'base64' ) } ` , 'content-type' : 'application/x-www-form-urlencoded' , dpop : await createDpopHeader ( tokenUrl , 'POST' , dpopKey ), }, body : 'grant_type=client_credentials&scope=webid' , }); // This is the Access token that will be used to do an authenticated request to the server. // The JSON also contains an \"expires_in\" field in seconds, // which you can use to know when you need to request a new Access token. const { access_token : accessToken } = await response . json (); Using the Access token to make an authenticated request \u00b6 Once you have an Access token, you can use it for authenticated requests until it expires. import { buildAuthenticatedFetch } from '@inrupt/solid-client-authn-core' ; import fetch from 'node-fetch' ; // The DPoP key needs to be the same key as the one used in the previous step. // The Access token is the one generated in the previous step. const authFetch = await buildAuthenticatedFetch ( fetch , accessToken , { dpopKey }); // authFetch can now be used as a standard fetch function that will authenticate as your WebID. // This request will do a simple GET for example. const response = await authFetch ( 'http://localhost:3000/private' ); Deleting a token \u00b6 You can see all your existing tokens by doing a POST to http://localhost:3000/idp/credentials/ with a JSON object containing your email and password as the body. The response will be a JSON list containing all your tokens. Deleting a token also requires a POST to the same URL, but with a delete key added to the JSON input object whose value is the ID of the token you want to remove.","title":"Client credentials"},{"location":"client-credentials/#automating-authentication-with-client-credentials","text":"One potential issue for scripts and other applications is that logging in and authenticating require user interaction. The CSS offers an alternative solution for such cases by making use of Client Credentials. Once you have created an account as described in the Identity Provider section , you can request a token that apps can use to authenticate without user input. All requests to the client credentials API currently require you to send along the email and password of that account to identify yourself. This is a temporary solution until the server has more advanced account management, after which this API will change. Below is example code of how to make use of these tokens. It makes use of several utility functions from the Solid Authentication Client . 
Note that the code below uses top-level await , which not all JavaScript engines support, so this should all be contained in an async function.","title":"Automating authentication with Client Credentials"},{"location":"client-credentials/#generating-a-token","text":"The code below generates a token linked to your account and WebID. This only needs to be done once; afterwards this token can be used for all future requests. import fetch from 'node-fetch' ; // This assumes your server is started under http://localhost:3000/. // This URL can also be found by checking the controls in JSON responses when interacting with the IDP API, // as described in the Identity Provider section. const response = await fetch ( 'http://localhost:3000/idp/credentials/' , { method : 'POST' , headers : { 'content-type' : 'application/json' }, // The email/password fields are those of your account. // The name field will be used when generating the ID of your token. body : JSON.stringify ({ email : 'my-email@example.com' , password : 'my-account-password' , name : 'my-token' }), }); // These are the identifier and secret of your token. // Store the secret somewhere safe as there is no way to request it again from the server! const { id , secret } = await response . json ();","title":"Generating a token"},{"location":"client-credentials/#requesting-an-access-token","text":"The ID and secret combination generated above can be used to request an Access Token from the server. This Access Token is only valid for a certain amount of time, after which a new one needs to be requested. import { createDpopHeader , generateDpopKeyPair } from '@inrupt/solid-client-authn-core' ; import fetch from 'node-fetch' ; // A key pair is needed for encryption. // This function from `solid-client-authn` generates such a pair for you. const dpopKey = await generateDpopKeyPair (); // These are the ID and secret generated in the previous step. // Both the ID and the secret need to be form-encoded. const authString = ` ${ encodeURIComponent ( id ) } : ${ encodeURIComponent ( secret ) } ` ; // This URL can be found by looking at the \"token_endpoint\" field at // http://localhost:3000/.well-known/openid-configuration // if your server is hosted at http://localhost:3000/. const tokenUrl = 'http://localhost:3000/.oidc/token' ; const response = await fetch ( tokenUrl , { method : 'POST' , headers : { // The header needs to be in base64 encoding. authorization : `Basic ${ Buffer . from ( authString ). toString ( 'base64' ) } ` , 'content-type' : 'application/x-www-form-urlencoded' , dpop : await createDpopHeader ( tokenUrl , 'POST' , dpopKey ), }, body : 'grant_type=client_credentials&scope=webid' , }); // This is the Access token that will be used to do an authenticated request to the server. // The JSON also contains an \"expires_in\" field in seconds, // which you can use to know when you need to request a new Access token. const { access_token : accessToken } = await response . json ();","title":"Requesting an Access token"},{"location":"client-credentials/#using-the-access-token-to-make-an-authenticated-request","text":"Once you have an Access token, you can use it for authenticated requests until it expires. import { buildAuthenticatedFetch } from '@inrupt/solid-client-authn-core' ; import fetch from 'node-fetch' ; // The DPoP key needs to be the same key as the one used in the previous step. // The Access token is the one generated in the previous step. 
const authFetch = await buildAuthenticatedFetch ( fetch , accessToken , { dpopKey }); // authFetch can now be used as a standard fetch function that will authenticate as your WebID. // This request will do a simple GET for example. const response = await authFetch ( 'http://localhost:3000/private' );","title":"Using the Access token to make an authenticated request"},{"location":"client-credentials/#deleting-a-token","text":"You can see all your existing tokens by doing a POST to http://localhost:3000/idp/credentials/ with a JSON object containing your email and password as the body. The response will be a JSON list containing all your tokens. Deleting a token also requires a POST to the same URL, but with a delete key added to the JSON input object whose value is the ID of the token you want to remove.","title":"Deleting a token"},{"location":"dependency-injection/","text":"Dependency injection \u00b6 The community server uses the dependency injection framework Components.js to link all class instances together, and uses Components-Generator.js to automatically generate the necessary description configurations of all classes. This framework allows us to configure our components in a JSON file. The advantage of this is that changing the configuration of components does not require any changes to the code, as one can just change the default configuration file, or provide a custom configuration file. More information can be found in the Components.js documentation , but a summarized overview can be found below. Component files \u00b6 Components.js requires a component file for every class you might want to instantiate. Fortunately those get generated automatically by Components-Generator.js. Calling npm run build will call the generator and generate those JSON-LD files in the dist folder. The generator uses the index.ts , so new classes always have to be added there or they will not get a component file. Configuration files \u00b6 Configuration files are how we tell Components.js which classes to instantiate and link together. All the community server configurations can be found in the config folder . That folder also contains information about how different pre-defined configurations can be used. A single component in such a configuration file might look as follows: { \"comment\" : \"Storage used for account management.\" , \"@id\" : \"urn:solid-server:default:AccountStorage\" , \"@type\" : \"JsonResourceStorage\" , \"source\" : { \"@id\" : \"urn:solid-server:default:ResourceStore\" }, \"baseUrl\" : { \"@id\" : \"urn:solid-server:default:variable:baseUrl\" }, \"container\" : \"/.internal/accounts/\" } With the corresponding constructor of the JsonResourceStorage class: public constructor ( source : ResourceStore , baseUrl : string , container : string ) The important elements here are the following: * \"comment\" : (optional) A description of this component. * \"@id\" : (optional) A unique identifier of this component, which allows it to be used as parameter values in different places. * \"@type\" : The class name of the component. This must be a TypeScript class name that is exported via index.ts . As you can see from the constructor, the other fields are direct mappings from the constructor parameters. source references another object, which we refer to using its identifier urn:solid-server:default:ResourceStore . baseUrl is just a string, but here we use a variable that was set before calling Components.js which is why it also references an @id . 
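Conceptually, the configuration above wires the class together as if one had written the following by hand (a hypothetical sketch; the actual instantiation is performed by Components.js):

// Manual equivalent of the JSON configuration shown above,
// with resourceStore and baseUrl resolved from the referenced identifiers.
const accountStorage = new JsonResourceStorage(resourceStore, baseUrl, '/.internal/accounts/');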
These variables are set when starting up the server, based on the command line parameters.","title":"Dependency injection"},{"location":"dependency-injection/#dependency-injection","text":"The community server uses the dependency injection framework Components.js to link all class instances together, and uses Components-Generator.js to automatically generate the necessary description configurations of all classes. This framework allows us to configure our components in a JSON file. The advantage of this is that changing the configuration of components does not require any changes to the code, as one can just change the default configuration file, or provide a custom configuration file. More information can be found in the Components.js documentation , but a summarized overview can be found below.","title":"Dependency injection"},{"location":"dependency-injection/#component-files","text":"Components.js requires a component file for every class you might want to instantiate. Fortunately those get generated automatically by Components-Generator.js. Calling npm run build will call the generator and generate those JSON-LD files in the dist folder. The generator uses the index.ts , so new classes always have to be added there or they will not get a component file.","title":"Component files"},{"location":"dependency-injection/#configuration-files","text":"Configuration files are how we tell Components.js which classes to instantiate and link together. All the community server configurations can be found in the config folder . That folder also contains information about how different pre-defined configurations can be used. A single component in such a configuration file might look as follows: { \"comment\" : \"Storage used for account management.\" , \"@id\" : \"urn:solid-server:default:AccountStorage\" , \"@type\" : \"JsonResourceStorage\" , \"source\" : { \"@id\" : \"urn:solid-server:default:ResourceStore\" }, \"baseUrl\" : { \"@id\" : \"urn:solid-server:default:variable:baseUrl\" }, \"container\" : \"/.internal/accounts/\" } With the corresponding constructor of the JsonResourceStorage class: public constructor ( source : ResourceStore , baseUrl : string , container : string ) The important elements here are the following: * \"comment\" : (optional) A description of this component. * \"@id\" : (optional) A unique identifier of this component, which allows it to be used as parameter values in different places. * \"@type\" : The class name of the component. This must be a TypeScript class name that is exported via index.ts . As you can see from the constructor, the other fields are direct mappings from the constructor parameters. source references another object, which we refer to using its identifier urn:solid-server:default:ResourceStore . baseUrl is just a string, but here we use a variable that was set before calling Components.js which is why it also references an @id . 
These variables are set when starting up the server, based on the command line parameters.","title":"Configuration files"},{"location":"example-requests/","text":"Interacting with the server \u00b6 PUT : Creating resources for a given URL \u00b6 Create a plain text file: curl -X PUT -H \"Content-Type: text/plain\" \\ -d \"abc\" \\ http://localhost:3000/myfile.txt Create a turtle file: curl -X PUT -H \"Content-Type: text/turtle\" \\ -d \"<ex:s> <ex:p> <ex:o>.\" \\ http://localhost:3000/myfile.ttl POST : Creating resources at a generated URL \u00b6 Create a plain text file: curl -X POST -H \"Content-Type: text/plain\" \\ -d \"abc\" \\ http://localhost:3000/ Create a turtle file: curl -X POST -H \"Content-Type: text/turtle\" \\ -d \"<ex:s> <ex:p> <ex:o>.\" \\ http://localhost:3000/ The response's Location header will contain the URL of the created resource. GET : Retrieving resources \u00b6 Retrieve a plain text file: curl -H \"Accept: text/plain\" \\ http://localhost:3000/myfile.txt Retrieve a turtle file: curl -H \"Accept: text/turtle\" \\ http://localhost:3000/myfile.ttl Retrieve a turtle file in a different serialization: curl -H \"Accept: application/ld+json\" \\ http://localhost:3000/myfile.ttl DELETE : Deleting resources \u00b6 curl -X DELETE http://localhost:3000/myfile.txt PATCH : Modifying resources \u00b6 Modify a resource using N3 Patch : curl -X PATCH -H \"Content-Type: text/n3\" \\ --data-raw \"@prefix solid: <http://www.w3.org/ns/solid/terms#>. _:rename a solid:InsertDeletePatch; solid:inserts { <ex:s2> <ex:p2> <ex:o2>. }.\" \\ http://localhost:3000/myfile.ttl Modify a resource using SPARQL Update : curl -X PATCH -H \"Content-Type: application/sparql-update\" \\ -d \"INSERT DATA { <ex:s2> <ex:p2> <ex:o2> }\" \\ http://localhost:3000/myfile.ttl HEAD : Retrieve resources headers \u00b6 curl -I -H \"Accept: text/plain\" \\ http://localhost:3000/myfile.txt OPTIONS : Retrieve resources communication options \u00b6 curl -X OPTIONS -i http://localhost:3000/myfile.txt","title":"Example request"},{"location":"example-requests/#interacting-with-the-server","text":"","title":"Interacting with the server"},{"location":"example-requests/#put-creating-resources-for-a-given-url","text":"Create a plain text file: curl -X PUT -H \"Content-Type: text/plain\" \\ -d \"abc\" \\ http://localhost:3000/myfile.txt Create a turtle file: curl -X PUT -H \"Content-Type: text/turtle\" \\ -d \"<ex:s> <ex:p> <ex:o>.\" \\ http://localhost:3000/myfile.ttl","title":"PUT: Creating resources for a given URL"},{"location":"example-requests/#post-creating-resources-at-a-generated-url","text":"Create a plain text file: curl -X POST -H \"Content-Type: text/plain\" \\ -d \"abc\" \\ http://localhost:3000/ Create a turtle file: curl -X POST -H \"Content-Type: text/turtle\" \\ -d \"<ex:s> <ex:p> <ex:o>.\" \\ http://localhost:3000/ The response's Location header will contain the URL of the created resource.","title":"POST: Creating resources at a generated URL"},{"location":"example-requests/#get-retrieving-resources","text":"Retrieve a plain text file: curl -H \"Accept: text/plain\" \\ http://localhost:3000/myfile.txt Retrieve a turtle file: curl -H \"Accept: text/turtle\" \\ http://localhost:3000/myfile.ttl Retrieve a turtle file in a different serialization: curl -H \"Accept: application/ld+json\" \\ http://localhost:3000/myfile.ttl","title":"GET: Retrieving resources"},{"location":"example-requests/#delete-deleting-resources","text":"curl -X DELETE http://localhost:3000/myfile.txt","title":"DELETE: Deleting 
resources"},{"location":"example-requests/#patch-modifying-resources","text":"Modify a resource using N3 Patch : curl -X PATCH -H \"Content-Type: text/n3\" \\ --data-raw \"@prefix solid: <http://www.w3.org/ns/solid/terms#>. _:rename a solid:InsertDeletePatch; solid:inserts { <ex:s2> <ex:p2> <ex:o2>. }.\" \\ http://localhost:3000/myfile.ttl Modify a resource using SPARQL Update : curl -X PATCH -H \"Content-Type: application/sparql-update\" \\ -d \"INSERT DATA { <ex:s2> <ex:p2> <ex:o2> }\" \\ http://localhost:3000/myfile.ttl","title":"PATCH: Modifying resources"},{"location":"example-requests/#head-retrieve-resources-headers","text":"curl -I -H \"Accept: text/plain\" \\ http://localhost:3000/myfile.txt","title":"HEAD: Retrieve resources headers"},{"location":"example-requests/#options-retrieve-resources-communication-options","text":"curl -X OPTIONS -i http://localhost:3000/myfile.txt","title":"OPTIONS: Retrieve resources communication options"},{"location":"identity-provider/","text":"Identity Provider \u00b6 Besides implementing the Solid protocol , the community server can also be an Identity Provider (IDP), officially known as an OpenID Provider (OP), following the Solid OIDC spec as much as possible. It is recommended to use the latest version of the Solid authentication client to interact with the server. The links here assume the server is hosted at http://localhost:3000/ . Registering an account \u00b6 To register an account, you can go to http://localhost:3000/idp/register/ if this feature is enabled, which it is on all configurations we provide. Currently our registration page ties 3 features together on the same page: * Creating an account on the server. * Creating or linking a WebID to your account. * Creating a pod on the server. Account \u00b6 To create an account you need to provide an email address and password. The password will be salted and hashed before being stored. As of now, the account is only used to log in and identify yourself to the IDP when you want to do an authenticated request, but in future the plan is to also use this for account/pod management. WebID \u00b6 We require each account to have a corresponding WebID. You can either let the server create a WebID for you in a pod, which will also need to be created then, or you can link an already existing WebID you have on an external server. In case you try to link your own WebID, you can choose if you want to be able to use this server as your IDP for this WebID. If not, you can still create a pod, but you will not be able to direct the authentication client to this server to identify yourself. Additionally, if you try to register with an external WebID, the first attempt will return an error indicating you need to add an identification triple to your WebID. After doing that you can try to register again. This is how we verify you are the owner of that WebID. After registration the next page will inform you that you have to add an additional triple to your WebID if you want to use the server as your IDP. All of the above is automated if you create the WebID on the server itself. Pod \u00b6 To create a pod you simply have to fill in the name you want your pod to have. This will then be used to generate the full URL of your pod. For example, if you choose the name test , your pod would be located at http://localhost:3000/test/ and your generated WebID would be http://localhost:3000/test/profile/card#me . The generated name also depends on the configuration you chose for your server. 
If you are using the subdomain feature, such as being done in the config/memory-subdomains.json configuration, the generated pod URL would be http://test.localhost:3000/ . Logging in \u00b6 When using an authenticating client, you will be redirected to a login screen asking for your email and password. After that you will be redirected to a page showing some basic information about the client. There you need to consent that this client is allowed to identify using your WebID. As a result the server will send a token back to the client that contains all the information needed to use your WebID. Forgot password \u00b6 If you forgot your password, you can recover it by going to http://localhost:3000/idp/forgotpassword/ . There you can enter your email address to get a recovery mail to reset your password. This feature only works if a mail server was configured, which by default is not the case. JSON API \u00b6 All of the above happens through HTML pages provided by the server. By default, the server uses the templates found in /templates/identity/email-password/ but different templates can be used through configuration. These templates all make use of a JSON API exposed by the server. For example, when doing a GET request to http://localhost:3000/idp/register/ with a JSON accept header, the following JSON is returned: { \"required\" : { \"email\" : \"string\" , \"password\" : \"string\" , \"confirmPassword\" : \"string\" , \"createWebId\" : \"boolean\" , \"register\" : \"boolean\" , \"createPod\" : \"boolean\" , \"rootPod\" : \"boolean\" }, \"optional\" : { \"webId\" : \"string\" , \"podName\" : \"string\" , \"template\" : \"string\" }, \"controls\" : { \"register\" : \"http://localhost:3000/idp/register/\" , \"index\" : \"http://localhost:3000/idp/\" , \"prompt\" : \"http://localhost:3000/idp/prompt/\" , \"login\" : \"http://localhost:3000/idp/login/\" , \"forgotPassword\" : \"http://localhost:3000/idp/forgotpassword/\" }, \"apiVersion\" : \"0.3\" } The required and optional fields indicate which input fields are expected by the API. These correspond to the fields of the HTML registration page. To register a user, you can do a POST request with a JSON body containing the correct fields: { \"email\" : \"test@example.com\" , \"password\" : \"secret\" , \"confirmPassword\" : \"secret\" , \"createWebId\" : true , \"register\" : true , \"createPod\" : true , \"rootPod\" : false , \"podName\" : \"test\" } Two fields here that are not covered on the HTML page above are rootPod and template . rootPod tells the server to put the pod in the root of the server instead of a location based on the podName . By default the server will reject requests where this is true , except during setup. template is only used by servers running the config/dynamic.json configuration, which is a very custom setup where every pod can have a different Components.js configuration, so this value can usually be ignored. IDP configuration \u00b6 The above descriptions cover server behaviour with most default configurations, but just like any other feature, there are several features that can be changed through the imports in your configuration file. All available options can be found in the config/identity/ folder . Below we go a bit deeper into the available options access \u00b6 The access option allows you to set authorization restrictions on the IDP API when enabled, similar to how authorization works on the LDP requests on the server. 
For example, if the server uses WebACL as its authorization scheme, you can put a .acl resource in the /idp/register/ container to restrict who is allowed to access the registration API. Note that for everything to work there needs to be a .acl resource in /idp/ when using WebACL so resources can be accessed as usual when the server starts up. Make sure you change the permissions on /idp/.acl so not everyone can modify those. All of the above is only relevant if you use the restricted.json setting for this import. When you use public.json the API will simply always be accessible by everyone. email \u00b6 In case you want users to be able to reset their password when they forget it, you will need to tell the server which email server to use to send reset mails. example.json contains an example of what this looks like, which you will need to copy over to your base configuration and then remove the config/identity/email import. handler \u00b6 There is only one option here. This import contains all the core components necessary to make the IDP work. In case you need to make some changes to core IDP settings, this is where you would have to look. pod \u00b6 The pod option determines how pods are created. static.json is the expected pod behaviour as described above. dynamic.json is an experimental feature that allows users to have a custom Components.js configuration for their own pod. When using such a setup, a JSON file will be written containing all the information of the user pods so they can be recreated when the server restarts. registration \u00b6 This setting allows you to enable/disable registration on the server. Disabling registration here does not disable registration during setup, meaning you can still use this server as an IDP with the account created there.","title":"Identity provider"},{"location":"identity-provider/#identity-provider","text":"Besides implementing the Solid protocol , the community server can also be an Identity Provider (IDP), officially known as an OpenID Provider (OP), following the Solid OIDC spec as much as possible. It is recommended to use the latest version of the Solid authentication client to interact with the server. The links here assume the server is hosted at http://localhost:3000/ .","title":"Identity Provider"},{"location":"identity-provider/#registering-an-account","text":"To register an account, you can go to http://localhost:3000/idp/register/ if this feature is enabled, which it is on all configurations we provide. Currently our registration page ties 3 features together on the same page: * Creating an account on the server. * Creating or linking a WebID to your account. * Creating a pod on the server.","title":"Registering an account"},{"location":"identity-provider/#account","text":"To create an account you need to provide an email address and password. The password will be salted and hashed before being stored. As of now, the account is only used to log in and identify yourself to the IDP when you want to do an authenticated request, but in future the plan is to also use this for account/pod management.","title":"Account"},{"location":"identity-provider/#webid","text":"We require each account to have a corresponding WebID. You can either let the server create a WebID for you in a pod, which will also need to be created then, or you can link an already existing WebID you have on an external server. In case you try to link your own WebID, you can choose if you want to be able to use this server as your IDP for this WebID. 
If not, you can still create a pod, but you will not be able to direct the authentication client to this server to identify yourself. Additionally, if you try to register with an external WebID, the first attempt will return an error indicating you need to add an identification triple to your WebID. After doing that you can try to register again. This is how we verify you are the owner of that WebID. After registration the next page will inform you that you have to add an additional triple to your WebID if you want to use the server as your IDP. All of the above is automated if you create the WebID on the server itself.","title":"WebID"},{"location":"identity-provider/#pod","text":"To create a pod you simply have to fill in the name you want your pod to have. This will then be used to generate the full URL of your pod. For example, if you choose the name test , your pod would be located at http://localhost:3000/test/ and your generated WebID would be http://localhost:3000/test/profile/card#me . The generated name also depends on the configuration you chose for your server. If you are using the subdomain feature, such as being done in the config/memory-subdomains.json configuration, the generated pod URL would be http://test.localhost:3000/ .","title":"Pod"},{"location":"identity-provider/#logging-in","text":"When using an authenticating client, you will be redirected to a login screen asking for your email and password. After that you will be redirected to a page showing some basic information about the client. There you need to consent that this client is allowed to identify using your WebID. As a result the server will send a token back to the client that contains all the information needed to use your WebID.","title":"Logging in"},{"location":"identity-provider/#forgot-password","text":"If you forgot your password, you can recover it by going to http://localhost:3000/idp/forgotpassword/ . There you can enter your email address to get a recovery mail to reset your password. This feature only works if a mail server was configured, which by default is not the case.","title":"Forgot password"},{"location":"identity-provider/#json-api","text":"All of the above happens through HTML pages provided by the server. By default, the server uses the templates found in /templates/identity/email-password/ but different templates can be used through configuration. These templates all make use of a JSON API exposed by the server. For example, when doing a GET request to http://localhost:3000/idp/register/ with a JSON accept header, the following JSON is returned: { \"required\" : { \"email\" : \"string\" , \"password\" : \"string\" , \"confirmPassword\" : \"string\" , \"createWebId\" : \"boolean\" , \"register\" : \"boolean\" , \"createPod\" : \"boolean\" , \"rootPod\" : \"boolean\" }, \"optional\" : { \"webId\" : \"string\" , \"podName\" : \"string\" , \"template\" : \"string\" }, \"controls\" : { \"register\" : \"http://localhost:3000/idp/register/\" , \"index\" : \"http://localhost:3000/idp/\" , \"prompt\" : \"http://localhost:3000/idp/prompt/\" , \"login\" : \"http://localhost:3000/idp/login/\" , \"forgotPassword\" : \"http://localhost:3000/idp/forgotpassword/\" }, \"apiVersion\" : \"0.3\" } The required and optional fields indicate which input fields are expected by the API. These correspond to the fields of the HTML registration page. 
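The same API can also be called from code; a minimal sketch (assuming the server runs at http://localhost:3000/ , with fields holding the JSON body shown next):

import fetch from 'node-fetch';

// POST the registration fields as JSON and ask for a JSON response.
const res = await fetch('http://localhost:3000/idp/register/', {
  method: 'POST',
  headers: { 'content-type': 'application/json', accept: 'application/json' },
  body: JSON.stringify(fields),
});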
To register a user, you can do a POST request with a JSON body containing the correct fields: { \"email\" : \"test@example.com\" , \"password\" : \"secret\" , \"confirmPassword\" : \"secret\" , \"createWebId\" : true , \"register\" : true , \"createPod\" : true , \"rootPod\" : false , \"podName\" : \"test\" } Two fields here that are not covered on the HTML page above are rootPod and template . rootPod tells the server to put the pod in the root of the server instead of a location based on the podName . By default the server will reject requests where this is true , except during setup. template is only used by servers running the config/dynamic.json configuration, which is a very custom setup where every pod can have a different Components.js configuration, so this value can usually be ignored.","title":"JSON API"},{"location":"identity-provider/#idp-configuration","text":"The above descriptions cover server behaviour with most default configurations, but just like any other feature, there are several features that can be changed through the imports in your configuration file. All available options can be found in the config/identity/ folder . Below we go a bit deeper into the available options.","title":"IDP configuration"},{"location":"identity-provider/#access","text":"The access option allows you to set authorization restrictions on the IDP API when enabled, similar to how authorization works on the LDP requests on the server. For example, if the server uses WebACL as its authorization scheme, you can put a .acl resource in the /idp/register/ container to restrict who is allowed to access the registration API. Note that for everything to work there needs to be a .acl resource in /idp/ when using WebACL so resources can be accessed as usual when the server starts up. Make sure you change the permissions on /idp/.acl so not everyone can modify those. All of the above is only relevant if you use the restricted.json setting for this import. When you use public.json the API will simply always be accessible by everyone.","title":"access"},{"location":"identity-provider/#email","text":"In case you want users to be able to reset their password when they forget it, you will need to tell the server which email server to use to send reset mails. example.json contains an example of what this looks like, which you will need to copy over to your base configuration and then remove the config/identity/email import.","title":"email"},{"location":"identity-provider/#handler","text":"There is only one option here. This import contains all the core components necessary to make the IDP work. In case you need to make some changes to core IDP settings, this is where you would have to look.","title":"handler"},{"location":"identity-provider/#pod_1","text":"The pod option determines how pods are created. static.json is the expected pod behaviour as described above. dynamic.json is an experimental feature that allows users to have a custom Components.js configuration for their own pod. When using such a setup, a JSON file will be written containing all the information of the user pods so they can be recreated when the server restarts.","title":"pod"},{"location":"identity-provider/#registration","text":"This setting allows you to enable/disable registration on the server. 
Disabling registration here does not disable registration during setup, meaning you can still use this server as an IDP with the account created there.","title":"registration"},{"location":"making-changes/","text":"Pull requests \u00b6 The community server is fully written in Typescript . All changes should be done through pull requests . We recommend first discussing a possible solution in the relevant issue to reduce the number of changes that will be requested. In case any of your changes are breaking, make sure you target the next major branch ( versions/x.0.0 ) instead of the main branch. Breaking changes include: changing interface/class signatures, potentially breaking external custom configurations, and breaking how internal data is stored. In case of doubt you probably want to target the next major branch. We make use of Conventional Commits . Don't forget to update the release notes when adding new major features. Also update any relevant documentation in case this is needed. When making changes to a pull request, we prefer to update the existing commits with a rebase instead of appending new commits; this way, the PR can be rebased directly onto the target branch instead of needing to be squashed. There are strict requirements from the linter and the test coverage before a PR is valid. These are configured to run automatically when trying to commit to git. Although there are no tests for it (yet), we strongly advise documenting with TSdoc . If a list of entries is alphabetically sorted, such as index.ts , make sure it stays that way.","title":"Pull requests"},{"location":"making-changes/#pull-requests","text":"The community server is fully written in Typescript . All changes should be done through pull requests . We recommend first discussing a possible solution in the relevant issue to reduce the number of changes that will be requested. In case any of your changes are breaking, make sure you target the next major branch ( versions/x.0.0 ) instead of the main branch. Breaking changes include: changing interface/class signatures, potentially breaking external custom configurations, and breaking how internal data is stored. In case of doubt you probably want to target the next major branch. We make use of Conventional Commits . Don't forget to update the release notes when adding new major features. Also update any relevant documentation in case this is needed. When making changes to a pull request, we prefer to update the existing commits with a rebase instead of appending new commits; this way, the PR can be rebased directly onto the target branch instead of needing to be squashed. There are strict requirements from the linter and the test coverage before a PR is valid. These are configured to run automatically when trying to commit to git. Although there are no tests for it (yet), we strongly advise documenting with TSdoc . If a list of entries is alphabetically sorted, such as index.ts , make sure it stays that way.","title":"Pull requests"},{"location":"release/","text":"Releasing a new version \u00b6 This is only relevant if you are a developer with push access responsible for doing a new release. Steps to follow: Merge main into versions/x.0.0 . Verify if there are issues when upgrading an existing installation to the new version. Can the data still be accessed? Does authentication still work? Is there an issue upgrading any of the dependent repositories (see below for links)? None of the above has to be blocking per se, but should be noted in the release notes if relevant. 
Verify that RELEASE_NOTES.md is correct. npm run release -- -r major or npx standard-version -r major Automatically updates Components.js references to the new version. Committed with chore(release): Update configs to vx.0.0 . Updates the package.json , generates a tag, and generates the new entries in CHANGELOG.md . Committed with chore(release): Release version vx.0.0 of the npm package You can always add --dry-run to the above command to preview the commands that will be run and the changes to CHANGELOG.md . Manually edit the CHANGELOG.md . All entries are added in separate sections of the new release according to their commit prefixes. Re-organize the entries accordingly, referencing previous releases. Most of the entries in Chores and Documentation can be removed. Make sure there are 2 newlines between this and the previous section. git add CHANGELOG.md && git commit --amend --no-edit --no-verify to add manual changes to the release commit. git push --follow-tags Merge versions/x.0.0 into main and push. Do a GitHub release. npm publish Check if there is a next tag that needs to be replaced. Rename the versions/x.0.0 branch to the next version. Update .github/workflows/schedule.yml and .github/dependabot.yml to point at the new branch.
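As an illustration of the rename step (the version numbers are hypothetical), this could look like git branch -m versions/7.0.0 versions/8.0.0 && git push origin -u versions/8.0.0 , after which the old remote branch can be removed with git push origin --delete versions/7.0.0 .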
Potentially upgrade dependent repositories: Recipes at https://github.com/CommunitySolidServer/recipes/ Tutorials at https://github.com/CommunitySolidServer/tutorials/ Changes when doing a pre-release of a major version: Version with npm run release -- -r major --pre-release alpha Do not merge versions/x.0.0 into main . Publish with npm publish --tag next . Do not update the branch or anything related.","title":"Releasing a new version"},{"location":"resource-store/","text":"Resource store \u00b6 Once an LDP request passes authorization, it will be passed to the ResourceStore . The interface of a ResourceStore is mostly a 1-to-1 mapping of the HTTP methods: GET: getRepresentation PUT: setRepresentation POST: addResource DELETE: deleteResource PATCH: modifyResource The OperationHandler of the relevant method is responsible for calling the correct ResourceStore function. In practice, the community server has multiple resource stores chained together, each handling a specific part of the request and then calling the next store in the chain. The default configurations come with the following stores: MonitoringStore IndexRepresentationStore LockingResourceStore PatchingStore RepresentationConvertingStore DataAccessorBasedStore This chain can be seen in the configuration part in config/storage/middleware/default.json and all the entries in config/storage/backend . MonitoringStore \u00b6 This store emits the events that are necessary to send out notifications when resources change. IndexRepresentationStore \u00b6 When doing a GET request on a container /container/ , the server returns the contents of /container/index.html instead if HTML is the preferred response type. All these values are the defaults and can be configured for other resources and media types. LockingResourceStore \u00b6 To prevent data corruption, the server locks resources when they are targeted by a request. Locks are only released when an operation is completely finished: in the case of read operations this means the entire data stream has been read, and in the case of write operations it means all relevant data has been written. The default lock that is used is a readers-writer lock. This allows simultaneous read requests on the same resource, but only while no write request is in progress. PatchingStore \u00b6 PATCH operations in Solid apply certain transformations on the target resource, which makes them more complicated than only reading or writing data since they involve both. The PatchingStore provides a generic solution for backends that do not implement the modifyResource function, so new backends can be added more easily. In case the next store in the chain does not support PATCH, the PatchingStore will GET the data from the next store, apply the transformation on that data, and then PUT it back to the store. RepresentationConvertingStore \u00b6 This store handles everything related to content negotiation. In case the resulting data of a GET request does not match the preferences of a request, it will be converted here. Similarly, if incoming data does not match the type expected by the store (the SPARQL backend only accepts triples, for example), that is also handled here. DataAccessorBasedStore \u00b6 Large parts of the requirements of the Solid protocol specification are resolved by the DataAccessorBasedStore : POST only working on containers, DELETE not working on non-empty containers, generating ldp:contains triples for containers, etc.
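As a hypothetical illustration of the latter: after creating /container/foo , a GET on the container would include a triple along the lines of <http://example.com/container/> <http://www.w3.org/ns/ldp#contains> <http://example.com/container/foo> .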
Most of this behaviour is independent of how the data is stored, which is why it can be generalized here. The store's name comes from the fact that it makes use of DataAccessor s to handle the read/write of resources. A DataAccessor is a simple interface that only focuses on handling the data. It does not concern itself with any of the necessary Solid checks, as it assumes those have already been made. This means that if a storage method needs to be supported, only a new DataAccessor needs to be made, after which it can be plugged into the rest of the server.","title":"Resource store"},{"location":"resource-store/#resource-store","text":"Once an LDP request passes authorization, it will be passed to the ResourceStore . The interface of a ResourceStore is mostly a 1-to-1 mapping of the HTTP methods: GET: getRepresentation PUT: setRepresentation POST: addResource DELETE: deleteResource PATCH: modifyResource The OperationHandler of the relevant method is responsible for calling the correct ResourceStore function. In practice, the community server has multiple resource stores chained together, each handling a specific part of the request and then calling the next store in the chain. The default configurations come with the following stores: MonitoringStore IndexRepresentationStore LockingResourceStore PatchingStore RepresentationConvertingStore DataAccessorBasedStore This chain can be seen in the configuration part in config/storage/middleware/default.json and all the entries in config/storage/backend .","title":"Resource store"},{"location":"resource-store/#monitoringstore","text":"This store emits the events that are necessary to send out notifications when resources change.","title":"MonitoringStore"},{"location":"resource-store/#indexrepresentationstore","text":"When doing a GET request on a container /container/ , the server returns the contents of /container/index.html instead if HTML is the preferred response type. All these values are the defaults and can be configured for other resources and media types.","title":"IndexRepresentationStore"},{"location":"resource-store/#lockingresourcestore","text":"To prevent data corruption, the server locks resources when they are targeted by a request. Locks are only released when an operation is completely finished: in the case of read operations this means the entire data stream has been read, and in the case of write operations it means all relevant data has been written. The default lock that is used is a readers-writer lock. This allows simultaneous read requests on the same resource, but only while no write request is in progress.","title":"LockingResourceStore"},{"location":"resource-store/#patchingstore","text":"PATCH operations in Solid apply certain transformations on the target resource, which makes them more complicated than only reading or writing data since they involve both. The PatchingStore provides a generic solution for backends that do not implement the modifyResource function, so new backends can be added more easily. In case the next store in the chain does not support PATCH, the PatchingStore will GET the data from the next store, apply the transformation on that data, and then PUT it back to the store.","title":"PatchingStore"},{"location":"resource-store/#representationconvertingstore","text":"This store handles everything related to content negotiation. In case the resulting data of a GET request does not match the preferences of a request, it will be converted here.
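For example, when a client requests a Turtle resource with Accept: application/ld+json , the stored Turtle would be converted to JSON-LD at this point (the media types here are only an illustration).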
Similarly, if incoming data does not match the type expected by the store (the SPARQL backend only accepts triples, for example), that is also handled here.","title":"RepresentationConvertingStore"},{"location":"resource-store/#dataaccessorbasedstore","text":"Large parts of the requirements of the Solid protocol specification are resolved by the DataAccessorBasedStore : POST only working on containers, DELETE not working on non-empty containers, generating ldp:contains triples for containers, etc. Most of this behaviour is independent of how the data is stored, which is why it can be generalized here. The store's name comes from the fact that it makes use of DataAccessor s to handle the read/write of resources. A DataAccessor is a simple interface that only focuses on handling the data. It does not concern itself with any of the necessary Solid checks, as it assumes those have already been made. This means that if a storage method needs to be supported, only a new DataAccessor needs to be made, after which it can be plugged into the rest of the server.","title":"DataAccessorBasedStore"},{"location":"seeding-pods/","text":"How to seed Accounts and Pods \u00b6 If you need to seed accounts and pods, the --seededPodConfigJson command line option can be used; its value is the path to a JSON file containing configurations for every required pod. The file needs to contain an array of JSON objects, with each object containing at least a podName , email , and password field. For example: [ { \"podName\" : \"example\" , \"email\" : \"hello@example.com\" , \"password\" : \"abc123\" } ] You may optionally specify other parameters as described in the Identity Provider documentation . For example, to set up a pod without registering the generated WebID with the Identity Provider: [ { \"podName\" : \"example\" , \"email\" : \"hello@example.com\" , \"password\" : \"abc123\" , \"webId\" : \"https://id.inrupt.com/example\" , \"register\" : false } ] This feature cannot be used to register pods with pre-existing WebIDs, as that requires an interactive validation step.","title":"Seeding Pods"},{"location":"seeding-pods/#how-to-seed-accounts-and-pods","text":"If you need to seed accounts and pods, the --seededPodConfigJson command line option can be used; its value is the path to a JSON file containing configurations for every required pod. The file needs to contain an array of JSON objects, with each object containing at least a podName , email , and password field. For example: [ { \"podName\" : \"example\" , \"email\" : \"hello@example.com\" , \"password\" : \"abc123\" } ] You may optionally specify other parameters as described in the Identity Provider documentation . For example, to set up a pod without registering the generated WebID with the Identity Provider: [ { \"podName\" : \"example\" , \"email\" : \"hello@example.com\" , \"password\" : \"abc123\" , \"webId\" : \"https://id.inrupt.com/example\" , \"register\" : false } ] This feature cannot be used to register pods with pre-existing WebIDs, as that requires an interactive validation step.","title":"How to seed Accounts and Pods"}]}