There's no single right set of tools for monitoring a Drupal site's performance and health. When thinking about performance monitoring, you need to tailor your approach to the number of applications you manage, their complexity, your business needs, and the skill set of your team. Based on these factors, you may choose to use one of the core or contributed modules, go with third-party solutions and services, or some combination of both.
Drupal core comes with a handful of modules that allow you to monitor the health and performance of the site, including Syslog, Database Logging, and the status reports provided by the System module. There are also numerous community-contributed modules, a sampling of which we'll cover here.
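For example, here's a minimal sketch of how a message reaches those logs, assuming a bootstrapped Drupal site (the `my_module` channel name is a hypothetical example):

```php
<?php

// Route a message through Drupal's logging API. With core's Database
// Logging (dblog) module enabled, it appears under Reports > Recent log
// messages; with Syslog enabled, it is forwarded to the operating
// system's logger instead (or as well).
$elapsed = 2.7; // e.g. a measured duration worth recording
\Drupal::logger('my_module')->warning('Slow operation took @secs seconds.', [
  '@secs' => $elapsed,
]);
```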
In this tutorial, we'll:
- List some contributed modules that are commonly used for monitoring a Drupal site
- Provide an overview of what each module does
By the end of this tutorial you should be able to list a few contributed modules that might be useful for monitoring your Drupal application and define what each one does.
New Relic is a monitoring service that provides insights into your application stack, from front-end performance to server and infrastructure metrics. New Relic uses a combination of aggregated server logs and pre-built (or custom) monitors to track the metrics that matter most to your application. The collected data can be organized into custom dashboards, and alerts can be configured and issued based on customizable conditions.
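As a small illustration, here's a hedged sketch of how Drupal code might feed extra context to the agent, assuming the New Relic PHP extension is installed (the transaction name and attribute values are made up for the example):

```php
<?php

// Attach Drupal-specific context to the current New Relic transaction so
// it shows up in traces and can drive custom dashboards and alerts.
if (extension_loaded('newrelic')) {
  // Give the transaction a meaningful name instead of a generic path.
  newrelic_name_transaction('drupal/node_view');
  // Record a custom attribute you can filter and chart on later.
  newrelic_add_custom_parameter('drupal_route', 'entity.node.canonical');
}
```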
In this tutorial, we'll:
- Learn about different New Relic modules and their purpose
- Review some default dashboard components and reports
- Discuss how to use the information in New Relic to understand the health of your Drupal application
By the end of this tutorial, you should understand the basics of using New Relic and the insights it offers to monitor and improve the performance of your Drupal site.
Performance profiling allows you to see an overview of how your Drupal application stacks up against your users' needs and business requirements. A good profile will help you understand where the performance bottlenecks are and where you should focus your efforts in order to achieve the best results when optimizing your application.
There are many profiling tools available to help you analyze your Drupal site's performance. Some are free -- like the browser’s built-in development tools, the Lighthouse Chrome extension, and XHProf. Some are paid -- like New Relic, Blackfire, and other profiling SaaS solutions.
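To give a feel for the free end of that spectrum, here's a rough sketch of capturing a profile with XHProf (assumes the `xhprof` PECL extension is installed; `slow_operation()` is a hypothetical stand-in for the code you want to profile):

```php
<?php

// Pretend workload standing in for the code under investigation.
function slow_operation(): void {
  usleep(100000);
}

// Start collecting call-graph, CPU, and memory data.
xhprof_enable(XHPROF_FLAGS_CPU | XHPROF_FLAGS_MEMORY);
slow_operation();
$profile = xhprof_disable();

// $profile is keyed by "parent==>child" edges, each with call counts and
// wall-time/CPU/memory figures you can feed into an XHProf viewer.
print_r(array_slice($profile, 0, 3));
```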
In this tutorial we'll:
- Outline the general concepts and goals of performance profiling
- List some available profiling tools and their features
By the end of this tutorial you should be able to describe what performance profiling is, and list the tools commonly used to establish a performance profile for a Drupal site.
When your site is experiencing performance issues, one way to pinpoint the cause is to use profiling tools. Before you can fix the issue you have to be able to identify what's causing it. All profiling tools do roughly the same thing: they tell you what code is called during the request and how much time is spent executing it. This helps to identify the slowest code and dig deeper into the cause. Once the cause is determined you can start figuring out how to optimize the code.
For this tutorial, we’ll use New Relic as a profiling tool, but you can apply a similar methodology using the profiling tool of your choice.
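Whichever tool you pick, the underlying measurement is the same: wall time spent in a piece of code during the request. Here's a hand-rolled sketch of that idea (`suspect_query()` is a hypothetical placeholder):

```php
<?php

// Stand-in for a slow database query or external call.
function suspect_query(): void {
  usleep(250000);
}

// Measure how much wall time the suspect code consumes.
$start = microtime(TRUE);
suspect_query();
$elapsed = microtime(TRUE) - $start;

// On a real site you'd send this to your logger or monitoring service.
printf("suspect_query() took %.1f ms\n", $elapsed * 1000);
```

A profiler such as New Relic effectively does this for every function call and aggregates the results across requests, which is why it can point you at the slowest code without manual instrumentation.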
In this tutorial, we'll:
- Learn how to identify and analyze slow transactions
- Look at common things to check for while profiling
- Cover some questions you should ask when looking at profiling data to help track down the slow code
By the end of this tutorial, you should know how to profile a Drupal site (specifically with New Relic) to find performance bottlenecks.
Drupal site performance relies heavily on caching. Optimal caching (and invalidation) requires that each page be rendered with the correct cacheable metadata. This metadata allows for intelligent caching -- but when something isn't working correctly, it can be tricky to figure out exactly where the metadata came from.
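As a refresher, here's a sketch of the cacheable metadata in question, declared on an ordinary render array (the node ID, context, and max-age values are illustrative):

```php
<?php

// Every render array can declare what it depends on (tags), what it
// varies by (contexts), and how long it may live (max-age). Drupal
// bubbles these up to compute the cacheability of the whole page.
$build = [
  '#markup' => t('Latest article teaser goes here.'),
  '#cache' => [
    'tags' => ['node:42'],         // invalidated whenever node 42 is saved
    'contexts' => ['user.roles'],  // one cache entry per role combination
    'max-age' => 3600,             // expire after an hour regardless
  ],
];
```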
When debugging Drupal cache issues, you're usually trying to answer one of two primary questions:
- Why is this cached? If the information gets stale, why isn’t it updated?
- Why is this not cached? And why is our cache hit rate low?
The Drupal cache system consists of many layers, each of which may contribute to the problem. This tutorial focuses on debugging the Drupal application cache layer, along with strategies for debugging Varnish. Because most caching layers external to Drupal rely on HTTP headers, you should be able to apply techniques similar to the ones used for debugging Varnish to those layers as well.
In this tutorial, we'll:
- Learn strategies for debugging the Drupal application cache and render cache
- Share strategies for debugging low hit rates when using Varnish
By the end of this tutorial, you should know how to enable and use various cache debugging mechanisms in Drupal to help identify problems in your site performance and resolve them.
Server scaling is the process of adding more resources (CPU, memory, disk space, etc.) to a server (or servers) to help with performance. This might be a single server, or a cluster of different machines. When we talk about server scaling, think more about the resources and less about the specific hardware. Modern servers may not always resemble a physical machine that you can open up and insert additional RAM into. But the theory is the same: more memory means the server can handle more concurrent requests.
Sometimes your Drupal site is optimized, but traffic is still high and consumes most of the server's resources. To sustain that load, you'll need to scale your server up.
Sometimes you don't have the resources or expertise to optimize caching, refactor code, or tune slow queries -- all of which would improve Drupal's performance. In these cases, too, you may consider server scaling.
Server scaling can be done in two ways: horizontally or vertically.
In this tutorial we'll:
- Learn what server scaling is
- Discuss examples of both horizontal and vertical server scaling
- Talk about when to choose horizontal versus vertical scaling strategies
By the end of this tutorial, you should understand the concept of server scaling and how it applies to a Drupal application.
No one likes to wait for a slow site to load. Not me, not you, and definitely not search engines. And the effects of site load times on things like SEO, user bounce rates, purchase intent, and overall satisfaction are only going to become more pronounced over time.
Drupal is a modern web framework that is capable of serving millions of users. But every site is unique, and while Drupal tries hard to be fast out of the box, you'll need to develop a performance profile, caching strategy, and scaling plan that are specific to your use case in order to be truly blazing fast.
Drupal site performance depends on multiple components, from hardware setup and caching system configuration to contributed modules, front-end page weight, and CDNs. Experienced Drupal developers looking to optimize their applications know where to start looking for potential savings. They can manipulate settings and combinations of these components to achieve the desired results. Our goal with this set of tutorials is to help explain the process and provide you with the insight that comes with experience.
In this tutorial we'll:
- Introduce high-level performance concepts for Drupal that we'll then cover in more detail elsewhere
- Provide an overview of the main Drupal performance components
By the end of this tutorial, you should understand what components around your Drupal application are responsible for site performance.
Why Solr?
Drupal has long provided a built-in search mechanism, so why do we need anything more? In this tutorial, we introduce Apache Solr, a free and open source search service that has several advantages and features beyond Drupal’s built-in search.
In this tutorial, we'll:
- Define Apache Solr
- Identify Apache Lucene, the search library that Solr is built on
- List key features of Solr
- Identify the advantages of Solr compared to Drupal search
Apache Solr is not a Drupal module, but a server application like Varnish or MySQL. Before we can use Solr with Drupal, we must plan how we will deploy Solr to our production site.
In this tutorial, we'll:
- List the requirements for Solr installation
- Identify when to install Solr on new hardware
- Describe various installation methods
By the end of this tutorial you should be able to describe a typical Solr install, and begin to list out the various things you'll need to do to install Solr for your environments.
Use Solr Locally
Just as you would for Drupal, you should always test your search configuration prior to deploying it to production. In this tutorial, we examine the various ways to set up Apache Solr locally on your system. Then we'll walk through setting up DDEV with Solr for local development.
In this tutorial, we'll:
- Describe options for running Solr locally
- List which popular local development environments provide Solr
- Show how to set up a DDEV-based local development environment with Solr
By the end of this tutorial you should be able to list the different ways that Solr can be installed locally, choose an option that works for you, and see how to get started quickly with DDEV.
Solr compartmentalizes itself into cores (or collections if you're using SolrCloud). Each Solr core has its own directory, configuration, and set of search data. While a core can be thought of as an “index”, it is much more than that.
In this tutorial, we'll:
- Identify the difference between a Solr core and an index.
- List the various ways a core can be created.
- Explain why Search API needs a custom core configuration.
By the end of this tutorial you should be able to explain what Solr cores are, and how to create a Solr core (or collection) that is compatible with Drupal's Search API module.
In order for Drupal to work with Apache Solr, we need to add the Search API module. This module provides a generic interface for search backends, including Solr. Furthermore, it adds several features to search without the need for custom code.
In this tutorial, we'll:
- Describe why Search API is necessary to use Solr with Drupal
- Identify a Search API server
By the end of this tutorial you should be able to install the Search API and Search API Solr modules, and create the Search API server configuration required to connect Drupal and Solr.
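As a preview of that end state, here's a hedged sketch of checking the connection from code, assuming current Search API APIs (`my_solr_server` is a hypothetical machine name for your server configuration):

```php
<?php

use Drupal\search_api\Entity\Server;

// Load the Search API server config entity and verify that it is both
// enabled and reachable. Useful in a deploy hook or a drush php script.
$server = Server::load('my_solr_server');
if ($server && $server->status() && $server->isAvailable()) {
  \Drupal::logger('search')->notice('Solr server is enabled and reachable.');
}
else {
  \Drupal::logger('search')->error('Solr server is missing, disabled, or unreachable.');
}
```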
When developing a Drupal site, it is best practice to maintain multiple environments: a production environment for your live site, a stage environment for “next version” development, and your local environment for debugging and creating new features. Solr adds further complexity, as you should have a separate Solr server for each environment.
In this tutorial, we'll:
- Describe why different Solr servers should be used for each environment
- Explain why Config Split is not a solution for multiple environments
- Describe how to use config overrides for each environment (see the sketch below)
By the end of this tutorial you should be able to override your Search API server configuration with environment-specific settings.
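As a preview, here's a minimal sketch of the override approach, placed in an environment-specific settings.php (the machine name, host, and config keys are illustrative; check your exported `search_api.server.*.yml` for the exact structure):

```php
<?php

// In settings.php (or settings.local.php) for the staging environment:
// point the shared Search API server config at staging's own Solr host
// and core, without forking the exported configuration itself.
$config['search_api.server.my_solr_server']['backend_config']['connector_config']['host'] = 'solr.staging.example.com';
$config['search_api.server.my_solr_server']['backend_config']['connector_config']['core'] = 'drupal_staging';
```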
While Solr compartmentalizes settings into cores, Search API organizes things into indexes. Each Search API index can have a unique set of settings and crawl a specified list of content types. Search API indexes can be created in the Search API admin interface.
In this tutorial, we'll:
- Identify a Search API index
- Describe how an index is related to a Solr core
By the end of this tutorial you should be able to create a new Search API Index connected to a Solr backend.
Creating an index alone is not enough. To populate the index, we need to specify which fields should be indexed. Selecting the fields is accomplished in the Search API admin UI.
In this tutorial, we'll:
- Explain how to select what populates a search index
- Describe a field boost, and how it is used to customize results
By the end of this tutorial you should be able to add fields to a Search API Index so their content is available for searching, and then instruct Search API to index the content of your site.
Reference field types, such as taxonomy term fields, paragraph fields, or plain entity reference fields, refer to a completely separate entity within the site. This makes search configuration complicated, as the typical scope of a search crawl is on a per-node (really a per-entity) basis. Fortunately, there are known strategies to index these fields with ease.
In this tutorial, we'll:
- Describe why reference types pose a particular challenge to indexing
- Discuss the importance of display modes in indexing
- Highlight how the Rendered HTML Output field can be used to index paragraphs
By the end of this tutorial you should be able to add reference fields to your Search API Index and allow users to search their contents in the correct context.
One of Search API’s key advantages is that custom search pages can be created using Views. This allows a high degree of customization, while relying on a familiar toolset.
In this tutorial, we'll:
- Describe how to use Views to create a search page
- Explain search page best practices, including requiring input and no-results text
By the end of this tutorial you should be able to create a page that users can use to search your site's content using the Solr search backend.
Drupal can support multiple Search API indexes within a single installation. While adding a new index is easy, we must understand the implications of creating and using multiple Search API indexes.
In this tutorial, we'll:
- Identify when to create multiple indexes in Search API
- Define virtual indexes, and their performance implications
Processors allow you to augment your search indexes by performing additional operations before or after the index operation. This can make your search more flexible.
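To make that concrete before we start, here's a hedged sketch of a minimal custom processor (the module name, plugin ID, and the marker it strips are all hypothetical; Search API also ships many processors ready to use):

```php
<?php

namespace Drupal\my_module\Plugin\search_api\processor;

use Drupal\search_api\Processor\ProcessorPluginBase;

/**
 * Strips a hypothetical "[internal]" marker from text before indexing.
 *
 * @SearchApiProcessor(
 *   id = "strip_internal_marker",
 *   label = @Translation("Strip internal marker"),
 *   description = @Translation("Removes [internal] markers before indexing."),
 *   stages = {
 *     "preprocess_index" = 0,
 *   },
 * )
 */
class StripInternalMarker extends ProcessorPluginBase {

  /**
   * {@inheritdoc}
   */
  public function preprocessIndexItems(array $items) {
    foreach ($items as $item) {
      foreach ($item->getFields() as $field) {
        $values = $field->getValues();
        foreach ($values as $key => $value) {
          if (is_string($value)) {
            $values[$key] = str_replace('[internal]', '', $value);
          }
        }
        $field->setValues($values);
      }
    }
  }

}
```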
In this tutorial, we'll:
- Identify what a processor is, and when it can be employed in the search pipeline.
- List useful processors provided by Search API.
- Describe how to apply a processor to an index, and why reindexing is necessary.
Excerpts are brief snippets of text displayed in search results. They give context to how the search terms relate to the result. Search API provides support for excerpts out of the box.
In this tutorial, we'll:
- Identify how to apply excerpts to a search index
- Describe how to add excerpt display to the search view
By the end of this tutorial you should be able to use the Highlight search processor to add excerpts to search results.