Often, web services require the user to create content. Votes on content, ratings, comments, and user-submitted stories are good examples of this. The JSON:API module supports the creation of entities by sending data in POST requests.
In this tutorial we will:
- Add an appropriate set of HTTP headers to a request that generates a new entity
- Construct a JSON object for the entity we want to create
- Issue a POST request that creates a new article node in Drupal
By the end of this tutorial you should be able to create a POST request that creates a new entity of any type via the JSON:API.
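As a quick preview, here's a minimal sketch of such a request using Python's requests library. The site URL, credentials, and field values are placeholders, not part of any real site:

```python
import requests

# Hypothetical site URL and credentials; adjust for your own Drupal install.
# The authenticated user needs permission to create article content.
response = requests.post(
    "https://example.com/jsonapi/node/article",
    headers={
        # JSON:API requires its own media type, not plain application/json.
        "Content-Type": "application/vnd.api+json",
        "Accept": "application/vnd.api+json",
    },
    auth=("apiuser", "secret"),
    json={
        "data": {
            "type": "node--article",
            "attributes": {
                "title": "Hello from the API",
                "body": {"value": "Created via a POST request."},
            },
        }
    },
)

# A successful creation returns 201 plus the new resource, including its UUID.
print(response.status_code, response.json()["data"]["id"])
```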
Sometimes unexpected things happen and Drupal needs to generate an error. The JSON:API specification describes how the server should return those errors. Understanding what to expect allows consumers to plan for errors and react gracefully.
In this tutorial we will:
- Discuss how HTTP errors are used in conjunction with JSON:API
- Learn about how JSON:API embeds information about the error encountered into the response object
By the end of this tutorial, you should have a basic understanding of the types of errors you can expect to receive when making JSON:API requests and what you can do to handle them.
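As a sketch of what this looks like from the consumer's side (the URL and UUID below are placeholders), errors arrive in a top-level errors array that you can inspect and react to:

```python
import requests

# Request a resource that doesn't exist; the UUID is a placeholder.
response = requests.get(
    "https://example.com/jsonapi/node/article/00000000-0000-0000-0000-000000000000",
    headers={"Accept": "application/vnd.api+json"},
)

if not response.ok:
    # Per the JSON:API spec, each member of the "errors" array describes one
    # problem, including the HTTP status and a human-readable title/detail.
    for error in response.json().get("errors", []):
        print(error.get("status"), error.get("title"), error.get("detail"))
```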
Collections are a very powerful feature because they allow us to access multiple items at the same time. However, in many situations we do not want to access all the entities of a given type, but only the ones that meet some specific criteria. In order to reduce the set of entities in the collection to the ones we care about, we use filters.
In this tutorial we will:
- Look at the `filter` query string parameter and how it can be used with JSON:API collections
- Learn how to use filters in combination with the JSON:API module for Drupal to reduce the list of entities in a collection
By the end of this tutorial you should be able to request a list of entities in the form of a JSON:API collection and filter that list to include only the entities that match a specific set of requirements.
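As a preview, here's a sketch of a filtered collection request in Python; the field names are the defaults on a core article content type, and the site URL is a placeholder:

```python
import requests

# Fetch only published articles whose title contains "Drupal".
response = requests.get(
    "https://example.com/jsonapi/node/article",
    headers={"Accept": "application/vnd.api+json"},
    params={
        "filter[status]": 1,  # shorthand for an "equals" condition
        "filter[title][operator]": "CONTAINS",
        "filter[title][value]": "Drupal",
    },
)

for item in response.json()["data"]:
    print(item["attributes"]["title"])
```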
Embedding resources at the consumer's request is one of the crucial features of a modern API. We mentioned in Modern Web Services with JSON:API and GraphQL that multiple round trips to the server are harmful to performance. This issue can be overcome by making a request that embeds any required related resources into the response for the resource we're retrieving.
In this tutorial, we'll learn how to use JSON:API's `include` parameter to embed resources in a response.
By the end of this tutorial, you should be able to make a single request that retrieves multiple embedded resources in order to improve the performance of your application when interacting with a JSON:API server.
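As a preview (the site URL is a placeholder; uid and field_tags are the default author and tags fields on the core article type), a single request with the include parameter returns the related resources in the same response document:

```python
import requests

# One request that fetches articles plus each article's author and tags.
response = requests.get(
    "https://example.com/jsonapi/node/article",
    headers={"Accept": "application/vnd.api+json"},
    params={"include": "uid,field_tags"},
)

document = response.json()
# Embedded resources arrive alongside the primary data, in "included".
for resource in document.get("included", []):
    print(resource["type"], resource["id"])
```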
The JSON:API module is our recommended starting point for creating REST APIs with Drupal. The JSON:API module has been part of Drupal core since version 8.7, so installing it no longer requires a separate download step.
In this tutorial we'll:
- Walk through installing the JSON:API module for Drupal
- Look at what you get out of the box with the JSON:API module
By the end of this tutorial you should be able to install the JSON:API module, and know what tools it provides you with.
Includes and filters are really powerful features. Combined, they can express almost any query your consumer application needs. The fancy filters we mentioned in a previous tutorial allow us to filter a collection based on fields of related entities, in addition to the fields directly on the entity itself.
In this tutorial we will:
- Learn about filtering based on data in related resources
- Filter based on multiple conditions and multi-value fields
- Demonstrate how to filter a collection of articles based on author or tags
By the end of this tutorial you should be able to use nested filters in conjunction with relationships to further refine the list of content returned in a JSON:API collection.
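For a taste of what's ahead, here's a sketch (placeholder URL; default article field names) that filters articles on fields of related entities using JSON:API's dotted-path syntax:

```python
import requests

# Articles authored by "admin" that are tagged with "Migration": each
# filter path walks through a relationship (uid, field_tags) to a field
# on the related entity (name).
response = requests.get(
    "https://example.com/jsonapi/node/article",
    headers={"Accept": "application/vnd.api+json"},
    params={
        "filter[uid.name]": "admin",
        "filter[field_tags.name]": "Migration",
    },
)

print([item["attributes"]["title"] for item in response.json()["data"]])
```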
Drupal allows for a rich data model where entity reference fields can be used to relate any number of different items together in different ways. The data models you can build with Drupal are often rich in relationships, which means we need a way to handle these in our API. While Drupal treats a string field and an entity reference field the same way, JSON:API distinguishes between attributes and relationships.
In this tutorial we'll:
- Look at how JSON:API represents relationships between two or more resources
- Learn how to distinguish between an attribute and a relationship in a response object
- Learn about what information is available for each relationship and how we can use it
By the end of this tutorial, you should have a better understanding of how the JSON:API specification represents relationships modeled using Drupal entity reference fields.
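As a sketch (placeholder URL and UUID), you can see the attribute/relationship split by inspecting any single-resource response:

```python
import requests

# Retrieve one article; the UUID below is a placeholder.
response = requests.get(
    "https://example.com/jsonapi/node/article/00000000-0000-0000-0000-000000000000",
    headers={"Accept": "application/vnd.api+json"},
)

data = response.json()["data"]
print(list(data["attributes"]))     # plain field values: title, created, ...
print(list(data["relationships"]))  # entity references: uid, field_tags, ...

# Each relationship carries resource linkage (type + id) plus links you can
# follow to fetch the related resources themselves.
print(data["relationships"]["uid"]["data"])
```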
Occasionally we need to remove entities from the backend using the API. REST APIs, and in particular JSON:API, use the HTTP DELETE method to accomplish this.
In this tutorial we'll create a request for deleting a single entity. By the end of this tutorial you should be able to issue requests that can delete any entity via JSON:API.
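Here's a minimal sketch of such a request (placeholder URL, UUID, and credentials):

```python
import requests

# Delete one article by UUID. The authenticated user needs permission
# to delete this content.
response = requests.delete(
    "https://example.com/jsonapi/node/article/00000000-0000-0000-0000-000000000000",
    headers={"Accept": "application/vnd.api+json"},
    auth=("apiuser", "secret"),
)

# JSON:API responds with 204 No Content on a successful delete.
print(response.status_code)
```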
Being able to retrieve resources from an API is a fundamental first step.
In this tutorial we will learn how to:
- Issue an HTTP request to extract information about a node from the JSON:API server
- Examine the response from the server
By the end of this tutorial you should know how to use an HTTP GET request to return a resource from the JSON:API server, and know what the default response for the resource will contain.
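As a preview, a minimal GET request looks like this sketch (placeholder URL and UUID):

```python
import requests

# Fetch a single article resource by its UUID.
response = requests.get(
    "https://example.com/jsonapi/node/article/00000000-0000-0000-0000-000000000000",
    headers={"Accept": "application/vnd.api+json"},
)

document = response.json()
# The default response contains the resource's type, id, attributes,
# relationships, and links.
print(document["data"]["type"], document["data"]["attributes"]["title"])
```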
When you enable the JSON:API module you're significantly increasing the attack surface of your application. So it's a good idea to make sure that you understand the implications of doing so, and how to mitigate potential security issues. In most cases this doesn't require much work, but it's worth taking the time to make sure you've done it right.
In this tutorial we'll learn:
- What JSON:API already does to keep you secure
- How to protect against common attacks
- How to limit access to resources exposed by JSON:API
By the end of this tutorial you should know what to look for when auditing your JSON:API configuration to help protect against common attacks.
By default, the JSON:API module returns all the available data for an object in its response. Using JSON:API sparse fieldsets, you can increase the performance of your consumer application by reducing the fields in the returned response object to just those you need.
In this tutorial, we will learn how to reduce the output to get exactly the information that we need from the API.
This is one of the most important features of modern APIs like JSON:API.
By the end of this tutorial, you'll know what sparse fieldsets are, the role they fulfill, and how to use them when requesting data from a JSON:API server.
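As a quick sketch (placeholder URL), the fields parameter names a resource type and lists the fields to keep:

```python
import requests

# Ask for only the title and creation date of each article.
response = requests.get(
    "https://example.com/jsonapi/node/article",
    headers={"Accept": "application/vnd.api+json"},
    params={"fields[node--article]": "title,created"},
)

for item in response.json()["data"]:
    print(item["attributes"])  # contains only title and created
```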
Whenever we need our consumer application to change the contents of an entity we will need to issue a PATCH request. The JSON:API module will process that request and update the entity with the provided values.
In this tutorial, we'll:
- Define the appropriate HTTP headers for a PATCH request
- Construct the JSON object used to update an entity
- Issue a PATCH request that will update an entity in our Drupal backend
By the end of this tutorial, you should know how to update content via the JSON:API.
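Here's a minimal sketch of such a request (placeholder URL, UUID, and credentials). Note that, unlike a POST, the payload must include the id of the resource being updated:

```python
import requests

uuid = "00000000-0000-0000-0000-000000000000"  # placeholder article UUID

# Update the title of an existing article.
response = requests.patch(
    f"https://example.com/jsonapi/node/article/{uuid}",
    headers={
        "Content-Type": "application/vnd.api+json",
        "Accept": "application/vnd.api+json",
    },
    auth=("apiuser", "secret"),
    json={
        "data": {
            "type": "node--article",
            "id": uuid,
            "attributes": {"title": "An updated title"},
        }
    },
)

# A 200 response with the updated resource indicates success.
print(response.status_code)
```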
For the last several years REST has been the de facto standard for web services. However, there are several common problems when developing websites and digital experiences with traditional REST implementations. Luckily, those issues have solutions, either complete or partial.
Modern web service specifications like JSON:API and GraphQL implement those solutions, although they differ slightly in their implementation.
In this tutorial we will learn about:
- The problems of traditional REST implementations
- Common solutions to those problems, and how JSON:API and GraphQL deal with them
By the end of this tutorial you'll be able to explain some of the common pitfalls of REST-based APIs, and how JSON:API and GraphQL address those issues.
The term web services has been around for quite a while. Given that web services is such a broad topic, let's define what web services are and how we are going to refer to them throughout this series so we are all on the same page.
This tutorial is an introduction to web services that will help you:
- Learn what a web service is.
- Understand that this series focuses on HTTP web services, and mostly on REST principles.
- Get some examples of APIs in the wild and what type of consumers they have.
By the end of this tutorial, you'll be able to define what web services are, and how we'll use the term in the context of these tutorials.
When running migrations you can use the `high_water_property` source plugin configuration option to influence which rows are considered for import on subsequent migration runs. This allows you to do things like only look at new rows added to a large dataset, or reimport records that have changed since the last time the migration was run. The term highwater mark comes from the water line marks found on structures in areas where water level changes are common. In running migrations, you can think of a highwater mark as a line that denotes how far the migration has progressed, saying, "from now on, we only care about data created after this line".
Another common use case for highwater marks is when you're importing a large dataset and the system runs out of resources. Usually this looks like a migration failing because it timed out, or because the process ran out of memory. A highwater mark allows you to pick up from where you left off.
In this tutorial we'll:
- Define what a highwater mark is, and how you can use one to limit the rows considered for importing each time a migration is executed.
- Demonstrate how highwater marks can be used to reimport source records that have been modified since the previous time the migration was executed.
- Introduce the `track_changes` option.
By the end of this tutorial you should be able to define what a highwater mark is and how to use one to speed up the import of large datasets, or to force the migration to reimport records when the source data has changed.
You should also be aware of the `track_changes` feature, which is a slower, but more dynamic, method of checking for changes in source data and reimporting records when a change is found.
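As a sketch, here's what declaring a highwater mark looks like in a migration definition (the migration ID and source details are hypothetical; in Drupal core the option is spelled high_water_property):

```yaml
# articles.yml (hypothetical, abbreviated migration definition)
id: articles
source:
  plugin: d7_node
  node_type: article
  # Track the highest "changed" timestamp imported so far. On the next
  # run, only rows with a larger value are considered for import.
  high_water_property:
    name: changed
    alias: n
process:
  title: title
destination:
  plugin: entity:node
  default_bundle: article
```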
If you need to write a migration that is capable of being executed multiple times and picking up changes to previously imported data, you can use the `track_changes` configuration option supported by most source plugins. This tells the migration to keep a hash of each source row; on subsequent runs of the same migration it compares the previous hash to the current one. If they don't match, the source data has changed, and Drupal will reimport that record and update the existing destination record with the new data.
Using `track_changes` differs from calling `drush migrate:import --update` in that using `--update` will force every record to be re-imported regardless of whether the source data has changed or not.
In this tutorial we'll:
- Learn how change tracking works to detect changes in your source data
- Use the `track_changes` option in a migration
Note that progress of a migration can also be tracked using highwater marks if the source data has something like a `last_updated` timestamp column. Using highwater marks in this case is likely more efficient than using `track_changes`. It's a good idea to understand how both features work, and then choose the appropriate one for each migration.
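As a sketch, enabling change tracking is a single source option (the migration and CSV details below are hypothetical; the csv plugin comes from the contributed Migrate Source CSV module):

```yaml
# articles.yml (hypothetical, abbreviated migration definition)
id: articles
source:
  plugin: csv
  path: /data/articles.csv
  ids: [id]
  # Store a hash of every imported source row. On later runs, any row
  # whose hash no longer matches is reimported and the destination updated.
  track_changes: true
process:
  title: title
destination:
  plugin: entity:node
  default_bundle: article
```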
Before you can migrate your Drupal 7 site to the latest version of Drupal you'll need to be able to rebuild the features that make up the current site. Part of this is evaluating all the modules you've got installed, figuring out what you're using them for, and determining whether there's a version that's compatible with the latest version of Drupal, along with a migration path.
I usually make a spreadsheet for this. But any list of the modules you’re currently using that allows you to keep track of how you plan to update them will work. You also want to keep track of where you are in the process of figuring that all out. Because it’s likely you’ll have some modules for which the path is clear, and others where it’s pretty murky and requires more in-depth research to find a path forward. Having a list means you can break that up into tasks, and ensure you’re not missing something. It'll also help you define when your migration is done as well as any final quality assurance (QA) tasks.
In this tutorial we'll:
- Start a list of the modules that make up our current site.
- Point to some tools that can help speed up the process of evaluating a module's readiness.
- Provide a set of questions that you can ask about each module you're using as part of your planning process.
By the end of this tutorial you should have a list of all the modules you're currently using, and some tools you can use to help you figure out how to move forward with each one.
If you want to be able to say you're done with your Drupal-to-Drupal migration, first you have to be able to define "done". And part of that is doing a content analysis and inventory. Performing a content analysis and inventory will help you ensure that you don't miss any fields or important records. It also gives you an opportunity to spend some time thinking about the overall information architecture of your site. You're already going to be doing a lot of work to migrate your content, so deciding to shuffle things a bit now might not add any significant extra time. Additionally, the latest Drupal (as of Drupal 10) is a different platform than either Drupal 6 or 7, and as such, there are some new best practices and new ways of doing things that might not have been available before.
In this tutorial we'll:
- Provide a set of questions you can ask yourself about the content of your current site to kick-start your analysis.
- Give an example of how we create a content type inventory in a spreadsheet and use that to help define "done" for our projects.
By the end of this tutorial you should be able to get started analyzing the content and content types of your existing site in order to start planning for your Drupal-to-Drupal migration.
Because current versions of Drupal are so different from older versions like Drupal 7, upgrading to the latest version requires creating a new site using the latest version of Drupal and then migrating your old site's content and configuration into it.
There is no one right way to tackle a Drupal-to-Drupal migration. Instead, it's like walking down a path and coming to a fork in the trail and then choosing a direction over and over and over. Since every site is different, every path to a finished migration will be different, too. I know nobody wants to hear it, but every migration is its own unique adventure. Every successful migration will require its own custom code, weird shell scripts, and detailed lists of the exact order of things. But that doesn't mean there's no plan to follow.
A successful migration charts a course through the maze, while leveraging existing tools and experience to help find the shortest route and the right path at each fork, with minimal backtracking due to wrong choices.
In this tutorial we'll:
- Look at what makes a Drupal-to-Drupal migration (a major version upgrade) so tricky, and how to think about performing one
- Define 3 high-level approaches to performing a Drupal-to-Drupal migration
- Get a better idea of what the migration work entails, so you go into it with a proper mental model.
By the end of this tutorial you should be able to explain the different approaches to performing a Drupal-to-Drupal migration in broad strokes, and have a better picture of the work that will be involved.
The `migration_lookup` process plugin allows you to populate entity reference fields when the entity you want to reference is created by another migration, and you don't yet know what its unique ID will be in the Drupal database. Think of this as being able to answer the question: I have a migration that imports new users from a CSV file, and another that imports articles from a different CSV file. The articles' CSV file has a column that indicates which row from the users' CSV file is the author of each article. But in order to populate the author field for the Drupal node, I need to know the ID of the imported user, which is an auto-incremented ID that I'm not setting. So, what is the correct ID to use?
Another use case is circular dependencies, like hierarchical taxonomies with parent/child relationships. Or a content type with a field that references other nodes of the same type, like articles that reference other articles.
In this tutorial we'll:
- Learn about the `migration_lookup` process plugin and its configuration
- Provide examples of using the plugin to populate entity reference fields during a migration
- Look at alternatives
By the end of this tutorial you should be able to use the `migration_lookup` plugin to effectively manage entity relationships across multiple individual migrations.
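As a sketch, here's the articles-and-users scenario from above expressed as migration configuration (all IDs, file paths, and column names are hypothetical):

```yaml
# articles.yml (hypothetical, abbreviated migration definition)
id: articles
source:
  plugin: csv
  path: /data/articles.csv
  ids: [id]
process:
  title: title
  # Ask the "users" migration's map table which Drupal user ID it created
  # for the value in this row's author_id column, and use that for uid.
  uid:
    plugin: migration_lookup
    migration: users
    source: author_id
destination:
  plugin: entity:node
  default_bundle: article
migration_dependencies:
  required:
    - users
```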