Using Tail to Debug Drupal Sites
Podcast Episode 66: Project Management
In this week's Drupalize.Me podcast, hostess Amber Matz chats about all things Project Management with Seth Brown (COO at Lullabot) and Lullabot Technical Project Managers Jessica Mokrzecki and Jerad Bitner. To continue the conversation, check out Drupalize.Me's series on Project Management featuring interviews and insights from these fine folks and others at Lullabot.
We are excited to introduce our latest series, Introduction to Project Management. This series is quite a bit different from our usual format. Instead of screencasts and walk-throughs, we've interviewed Lullabot's technical project managers, the Chief Operating Officer, and the Account Director, and organized the footage into a series of lessons on topics that project managers deal with on a regular basis, including contracts, communication, estimation, tools and methodologies.
In this lesson, we'll hear from Lullabot project managers about what they think makes a good ticket tracking system, some helpful ways to organize tickets, and other features that many ticketing platforms provide, such as conversation tracking, email notifications, and reporting. We'll also hear about a tool called a Gantt chart, which can help a project manager answer questions such as "When will the project be finished?", "What is the critical path?", and "What are the dependencies in this project?"
Additional resources
- Gantt Charts (SmartSheet)
- Burn down chart
- zenhub.io for GitHub
- Agile for JIRA
- Trello.com (standalone w/some integrations)
- JIRA: Agile plugin has a nice tool for planning sprints
- GitHub: use Milestones
Welcome to a special series on Project Management from Drupalize.Me. This series differs from our usual format of screencasts and presentations. We interviewed Lullabot's technical project managers, the Chief Operating Officer, and the Account Director, and organized the footage into a series of lessons on topics that project managers (PMs) deal with on a regular basis, including contracts, communication, estimation, tools and methodologies.
Here's what we'll be covering in this Introduction to Project Management series:
- Project Management Methodologies: Learn about terminology and methodologies used in the world of software project management and how combining methodologies from different traditions can work effectively on service projects.
- Tools for Managing Projects: Learn about the kinds of tools that are used in software project management for tracking progress, reporting, and team and client communication.
- Traits of a Project Manager: What makes a great project manager? What characteristics, skills, and approaches are great to have in someone in the role of technical project manager?
- Types of Services Contracts: Learn about the different types of services contracts; in particular, the three that Lullabot uses in their client engagements. As you will see, the type of contract can have different implications in a project manager's needed skills and approach.
- Estimation on Drupal Projects: Learn about the challenges of estimation, and gain insight into what estimates should communicate and how they should illuminate the various degrees of risk and uncertainty in a project.
- Resourcing and Scheduling: Learn about the challenge of determining how many people are needed for a project, and what questions to ask when determining capacity.
- Putting Together Teams: Learn strategies for how to put together teams, especially for large projects.
- Being Human on Projects: Learn about the people skills that are important to have, ways to detect burnout, and how to help team members get back on track.
- Client Communication: Learn about the many facets of client communication.
- Managing Expectations: Learn strategies for aligning and managing client expectations from the perspective of sales and account management.
- Project Kick-Off Essentials: Learn about the essential elements of a successful project kick-off meeting.
- Problems, Risks, and Red Flags: Learn strategies for identifying and dealing with problems, risks, and red flags on a project.
- Quality Assurance (QA): Learn about the various kinds of QA that you can implement in your project.
- Demos and Retrospectives: Learn about demos and retrospectives, some things to consider in a prototyping process, and what you can learn from these activities.
- Launch and Celebration: Learn some tips for ensuring a successful launch and the importance of celebrating the accomplishments of the team.
Whether you are a developer-turned-project-manager or a seasoned veteran, we think you will find insight into the art and science of project management in this series.
In this lesson, you’ll learn about methodologies and techniques that are often used in project management. These methodologies provide a set of processes for a development team to utilize, and a framework that a project manager can use to structure a project’s tasks and progress. You’ll hear from project managers at Lullabot who explain terms such as Waterfall, Agile, Wagile or Consultancy Scrum, Kanban, Scrum, and Sprints and how a tailored combination of these techniques often leads to the best results.
Additional resources
Illustration Made Simple with Shapes
Whether you are a designer, or haven't doodled anything since you were bored in study hall, have you ever wondered what it takes to successfully illustrate something fun and compelling? After all, illustrations and graphics can be powerful visual tools used to enhance our content. Luckily, the answer is quite simple: all you need is creative problem solving, basic shapes, and a few tricks up your sleeve.
This week we wrap up our exploration of integrating Drupal with Apache Solr. We'll look at using facets for narrowing returned search results, and some additional Solr server configuration options for further refining our index.
One of the benefits of building our own search application is that we have ultimate control over the ranking of items. Combined with our superior knowledge of our own content, we can use this to ensure that when someone searches for a specific keyword, our best content for that term bubbles to the top of the list, regardless of how Solr would rank it based on its internal algorithms. This is commonly referred to as promoted, or sponsored, results: the artificial boosting of a particular document to the top of the result list for a specific query.
A similar, but not exactly the same, example would be sponsored results on Google searches, where you can pay to have your page listed at the top of the results for a specific keyword or set of keywords. We are going to be doing all of this except for the part where we let people pay to promote results, though you could certainly build that part on your own if you need that.
Solr uses a configuration file named elevate.xml, in conjunction with a processor, to elevate results at the time a query is performed. We can promote specific documents in our Solr index by figuring out the unique Solr ID for a document and then adding it to the elevate.xml file along with some information about a query, or queries, this document should be promoted for.
In this tutorial we'll learn how to find a Solr document's unique ID, and then configure Solr to use an elevate.xml file that will promote the "How to Use the Fish Finder" page to the top of the results when someone searches for the term "fish". This configuration is all within the Solr application itself and doesn't really rely on Drupal in any way. As such, the material in this tutorial should be applicable to your Solr search applications even if you're not building them with Drupal.
By the end of this lesson you should be able to configure promoted documents in your own Solr-based search application.
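As a rough sketch of what this configuration can look like, here is a hypothetical elevate.xml, along with the solrconfig.xml search component that reads it. The document ID shown is made up; the real value depends on how your index constructs unique IDs, which is exactly what we'll look up in this tutorial.

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<!-- elevate.xml: promote a specific document whenever the query is "fish" -->
<elevate>
  <query text="fish">
    <!-- Hypothetical unique ID; find the real one in your Solr index -->
    <doc id="abc123/node/42" />
  </query>
</elevate>
```

```xml
<!-- solrconfig.xml: the QueryElevationComponent that reads elevate.xml -->
<searchComponent name="elevator" class="solr.QueryElevationComponent">
  <str name="queryFieldType">string</str>
  <str name="config-file">elevate.xml</str>
</searchComponent>
```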
Additional resources
Depending on the data that is being searched, some shorter general words, like "a", "the", or "is", can adversely affect search result relevancy. Consider the word "the", which in a standard description of a fish in our database could easily appear hundreds of times or more. When a search is performed, part of the algorithm that calculates the relevancy of any document in the index is to count the number of times a word appears in the text being searched. The more often it appears, the more relevant the document. Words like "the", however, often have little to no real bearing on a document's actual relevancy. These words should instead be excluded from the ranking algorithm.
Stop words can also serve another purpose. You can filter out words that are so common in a particular set of data that the system can't handle them in a useful way. For example, consider the word "fish" in our dataset. It's probably very common. With only 500 fish being indexed it's not really going to make much difference, but what if we were indexing five million fish, and each one had the word "fish" in the description even just five times? That's 25 million occurrences of the word "fish". Eventually we might start to hit the upper limit of what Solr can handle. The word "fish" in this case is probably also not very useful in a search query. You're browsing a fish database. Are you really likely to search for the query "fish" and expect any meaningful results? Likely it would instead return every result. It would be like going to Drupal.org and searching for the word "drupal" and expecting to get something useful. Not going to happen.
Solr has the ability to read in a list of stop words, or words that should be ignored during indexing, so that those words do not clutter your index and are removed from influencing result relevancy. In this tutorial we'll take a look at configuring stop words for Solr.
First, we'll use the Solr web UI to see the most common terms in our index for the body field. Then, based on that list, and the list of common stop words provided by the Solr team, we'll configure our stopwords.txt file. Finally, we'll re-index all the content of our site so that it makes use of the new stop words configuration and re-examine the most common terms noting that our stop words no longer appear in the list.
By the end of this tutorial you should be able to use the Solr web UI to get a list of the most common terms in your index, and know how to add terms to Solr's stopwords.txt file to prevent them from showing up in your index.
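As a minimal sketch of what we'll end up with, a stopwords.txt file is simply one term per line, and Solr applies it through a StopFilterFactory in the relevant field type's analyzer chain (the exact field type and the terms themselves will depend on your schema and your data):

```
# stopwords.txt: one term per line; lines starting with # are comments
a
an
and
is
the
fish
```

```xml
<!-- schema.xml: apply the stop word list when analyzing the field -->
<filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt" />
```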
Additional resources
Solr provides the option to configure synonyms for use during both indexing and querying of textual data. A synonym is a word or phrase that means exactly or nearly the same thing as another word or phrase in the same language. For example, shut is a synonym of close. Synonyms, if not accounted for, can cause a dilution of search result relevancy when searching for keywords that have lots of variations in your index.
Consider for example the words "ipod", "i-pod", and "i pod". It's pretty easy to imagine a scenario in which the content of our site could contain all three variations of the word. When someone searches, though, they are likely to search for just one, but expect results for all three. In order to not break those expectations we need to make sure we account for this scenario. Another example from the Drupal world would be the terms "CMI" and "configuration management". Chances are if you search for one you would be happy to see results for the other.
In this tutorial we'll look at using the synonyms.txt file that is part of our Solr configuration in order to account for synonyms in our data. Of course the exact words you use will depend on the content of your site, but we can at least cover how they work and how to configure them.
By the end of this tutorial you should be able to configure Solr to be aware of synonyms in your data in order to improve the quality of your search results.
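For illustration, each line of synonyms.txt is a comma-separated group of equivalent terms (or a one-way mapping), and the file is wired in through a SynonymFilterFactory in your field type's analyzer; the field types and the terms themselves will vary with your content:

```
# synonyms.txt: comma-separated terms are treated as equivalent
ipod, i-pod, i pod
CMI, configuration management
# A one-way mapping rewrites the left-hand terms to the right-hand term
colour => color
```

```xml
<!-- schema.xml: expand="true" indexes every synonym in a group -->
<filter class="solr.SynonymFilterFactory" synonyms="synonyms.txt" ignoreCase="true" expand="true" />
```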
Additional resources
Using facets allows users of your search application to further narrow the results returned from a keyword search by selecting one or more attributes of the returned content and saying either show me only these, or show me everything but these. In this tutorial we'll take a look at some examples of faceted searching in practice, and then we'll use the Facet API module to expose facets for our genus and species fields.
One of the most common uses of facets is on e-commerce sites like Zappos.com that have huge collections of products that users can browse through, and narrow down, to focus in on exactly the pair of shoes they are after. In this example facets allow you to do things like narrow the results returned from your initial keyword search to just shoes for men, which are brown, size 10.5, and on sale. You can also see faceting in action any time you perform a search on our site.
We'll use facets to allow users of the fish finder application to limit the results returned to just those of a specific species or genus. In doing so we'll also look at the options available for determining how facets should be displayed, whether or not we should show a facet that has zero documents in our result set, and how to combine multiple facets together into a single query using either AND or OR logic.
By the end of this tutorial you should be able to use the Facet API module in conjunction with Search API in order to provide facets that your users can use to further narrow and refine their search results.
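Behind the scenes, a faceted search is just a normal Solr query with a few extra parameters. Facet API builds these for us, but a hand-written equivalent might look like the following sketch; the field name im_field_genus is an assumption, and your index's field names will differ:

```
# Ask Solr for facet counts on the genus field alongside the results
/select?q=fish&facet=true&facet.field=im_field_genus&facet.mincount=1

# Clicking a facet adds a filter query, narrowing results with AND logic
/select?q=fish&fq=im_field_genus:Salmo

# OR logic combines several values inside a single filter query
/select?q=fish&fq=im_field_genus:(Salmo OR Oncorhynchus)
```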
Additional resources
Create Offsite Backups with NodeSquirrel
In our free Module Monday: Backup and Migrate tutorial we discussed all the benefits and features the module has to offer. In this tutorial I am going to build on the functionality of the module because something great has happened in the Drupal world. Pantheon, a Drupal hosting provider, has purchased NodeSquirrel, an offsite backup solution created by the makers of the Backup and Migrate module. What is so great about this is that Pantheon is offering free backups of up to 5 GB. This means there are no more excuses not to have an offsite backup of your Drupal database.
Podcast Episode 65: Web Accessibility
Join Amber Matz as she chats with web accessibility aficionados Mike Gifford, Chris Albrecht, and Helena Zubkow about what web developers and Drupalistas can do to build more accessible web sites. How has web accessibility changed over the years? Who is being left behind? What are some common gotchas? What are some easy ways to get started testing for accessibility? All these questions and more are discussed in today's podcast. Don't forget to check out the links and resources in the show notes for all sorts of useful things mentioned in our discussion.
Monthly Update, May 2015
It's that time again! Our team worked hard last month to bring new content and site features to our members. Here's an overview of what we accomplished.
This week we continue exploring the Search API module and use it to display search results from Solr in Drupal, as well as look at additional configuration options for our Search API index.
There are a couple of configuration options available when configuring a Search API index that we haven't looked at yet: adding additional fields, and using boost values to increase the relevance of a keyword when found in a specific field.
Solr allows you to index any number of additional fields, so we'll add a species and genus field to our index. This is one of the reasons using Search API to interface with Solr is so great. Through its use of the Entity API, the Search API module has a deep understanding of all the content types on your site and the fields that are attached to them, without you having to write any code, or do anything other than configure things in the UI.
One of the benefits of creating your own search index is that you know your data better than anyone, and you know what people are hoping to find in your content. Solr allows you to configure a boosting value that can be used to increase the relevancy of keywords depending on where in the data they are found. For example, when someone searches for a keyword we can probably assume that if the keyword is in the page title it is worth more relevancy points than if it is found in the page body. With boosting we can affect the relevancy ranking of results and help our users more quickly find what they are looking for.
By the end of this tutorial you should be able to add additional fields to your Solr index so their content is available for searching, as well as assign a relevancy boosting value when keywords are found in specific fields.
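To make the idea concrete, field weighting in Solr is often expressed at query time with the dismax/edismax qf parameter; the sketch below shows that syntax by hand. The title and body field names are placeholders, not the actual field names Search API generates for you.

```
# Matches in the title count five times as much as matches in the body
/select?defType=edismax&q=halibut&qf=title^5.0 body^1.0
```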
The Search API module by itself doesn't provide a UI for submitting a search query, or a page for displaying results. Instead, it exposes an API that other modules can use to provide those features. This makes it super flexible, but it also means we've got some extra work to do in order to allow someone to actually perform a search and see the results.
In this tutorial we'll look at using the Search API Pages module to create a simple search page with a form at the top and a list of results ordered by relevancy. Search API Pages is the quickest and easiest way to replace the Drupal core search module's functionality with a form that uses Solr for a search backend instead of MySQL.
When creating a new page with the Search API Pages module we can choose the view mode that we would like to use for displaying results. It works very nicely with Drupal's built-in view modes, as well as contributed modules like Display Suite, in order to allow for a high level of customization of view modes, and thus of the displayed results.
You can also configure the query type to use, choosing from one of: multiple terms, single term, or direct query. For integration with Solr you'll likely want to choose direct query, and allow Solr to handle the query parsing since it has a lot of advanced options that go far beyond what Search API handles on its own. However, we'll look at the different query type configurations, and demonstrate things we can do with direct query searches and the powerful Solr query syntax that we can't do with the other modes.
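To give a taste of what direct queries open up, here are a few examples of the Lucene query syntax Solr can parse; the field names are hypothetical and would need to match the fields in your own index:

```
# Phrase search restricted to a single field
title:"fish finder"

# Boolean operators and grouping
genus:Salmo AND (body:river OR body:lake)

# Wildcard and fuzzy matching
fish* AND salmon~
```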
Finally, we'll look at the block that Search API Pages provides, and use it to replace the search form on the home page of our site with a form that points to our new Search API Pages search results page.
By the end of this tutorial you should be able to expose a page on your site that will allow your visitors to perform a search using the Solr index and have the results displayed in Drupal.
Additional resources
The Search API module supports a handful of data alterations and processors: additional operations that can be performed on a document before it's indexed or during the display of search results. While Solr actually handles the majority of these for us already, this tutorial will look at the available options, talk about what each one does, and explain which ones are still relevant when using Solr as a backend.
Looking at data alterations in the Search API module also raises an important point about security. By default, Search API doesn't care about your content's access control settings. In order to prevent people from seeing results for their searches that contain data they shouldn't have access to we need to make sure we account for that in our configuration.
Here's a good list of the currently available data alterations and processors, though it's worth noting that not all of them are available for all search backends. Also, as we'll see, not all of them are recommended when using Solr even if they are available. Solr's tokenizer, for example, is much more full-featured than the Search API tokenizer, so when using Solr as a backend it's best to keep the Search API tokenizer turned off and let Solr do its thing.
By the end of this lesson you should be able to use data alterations and processors to filter out specific content types from your Solr index and to highlight keywords found when displaying search results. You'll also be able to explain why some alterations and processors are better left off so that Solr can handle those tasks directly.
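For example, instead of enabling Search API's own highlighting processor, you can let Solr return highlighted snippets itself using its standard highlighting parameters; the body field name below is an assumption and should be replaced with the field name used in your index:

```
# Return snippets with matched keywords wrapped in <strong> tags
/select?q=shark&hl=true&hl.fl=body&hl.simple.pre=<strong>&hl.simple.post=</strong>
```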