Drupal News

Forum One: Customizing SearchApiQuery Filters

Planet Drupal - Sat, 23/08/2014 - 6:34am

I recently had the opportunity to play with Search API filters to modify Solr searches, as we had to implement a complex set of filtering rules for a Drupal project. This became necessary because Views filters don't easily support our conditions, and you really don't want to rely on the node access system when your goal is to alter the result set rather than forbid access altogether. Using SearchApiQuery can be tricky at first, but with practice you can get the hang of using it effectively.

First, the easiest way to modify these searches is to implement hook_search_api_query_alter() in one of your modules. Since we use Features heavily, we usually have a search-focused Features module and write any related code within the appropriate feature's .module file. This hook has one parameter: the SearchApiQuery object itself.

/**
 * Implements hook_search_api_query_alter().
 */
function my_module_search_api_query_alter($query) {
}

Remember: we don't need to pass the object by reference, since objects are always passed by reference in PHP; this also means we don't need to return anything from this hook implementation. The SearchApiQuery object provides several operations (presented in its documentation) for modifying its search query, but I'm just going to focus on two of them today.

$filter = $query->createFilter($conjunction);

…where $conjunction is a string containing either 'AND' or 'OR'. This creates a new SearchApiQueryFilter object that can be added back to the $query later, after we've added conditions to it.

$query->filter($filter);

Once we have a filter object, we can add conditions or other filters to it:

$filter->condition('field_name', 'value');
$filter->filter($other_filter);

As any programmer knows, a nested condition can be broken down into its basic parts. So the following statement

if ($this == $that && ($here > $there || $foo == $bar))

…can be broken down into five evaluations (working backwards allows us to resolve the child conditions before the parents):

1. $foo == $bar

2. $here > $there

3. Result of evaluation 1 || Result of evaluation 2

4. $this == $that

5. Result of evaluation 4 && Result of evaluation 3

Working with Search API filters is no different from constructing the nested if statement above.
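As a rough sketch, the full statement maps onto Search API filter objects like this (the field names standing in for the variables above are hypothetical); the rest of this post builds the same pattern up step by step:

// AND group: $this == $that && (...).
$outer = $query->createFilter('AND');
$outer->condition('field_this', $that);

// OR group: $here > $there || $foo == $bar.
$inner = $query->createFilter('OR');
$inner->condition('field_here', $there, '>');
$inner->condition('field_foo', $bar);

// Nest the OR group inside the AND group, then attach it to the query.
$outer->filter($inner);
$query->filter($outer);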

First, we want to create a base filter for our evaluation. Here I am using an example where the node should be published, but this would typically already be added by your view, or by node access integration if you check the “Node access” option at admin/config/search/search_api/index/[index name]/workflow.

$base_filter = $query->createFilter('AND');
$base_filter->condition('status', 1);

We use AND since that is what we outlined. To create additional sub filters, we draw from the same query object.

$subfilter = $query->createFilter('OR');

Conditions are straightforward: just pass the field and the value to filter by. The comparison defaults to an equality check, but you do get additional comparison operators for integer values. Refer to the API link above for a complete list of options.

$subfilter->condition('field_my_field', 'value');
$subfilter->condition('field_other_field', 'value2');
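If you need a different comparison, pass the operator as an optional third argument; for example, with a hypothetical integer field:

$subfilter->condition('field_count', 10, '>');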

Now to add this sub filter to the base filter, just pass it to filter().

$base_filter->filter($subfilter);

…and then add the base filter back to the query object:

$query->filter($base_filter);

That’s all there is to working with the SearchApiQuery interface. The complete example looks something like this:

/**
 * Implements hook_search_api_query_alter().
 */
function my_module_search_api_query_alter($query) {
  $base_filter = $query->createFilter('AND');
  $base_filter->condition('status', 1);

  $subfilter = $query->createFilter('OR');
  $subfilter->condition('field_my_field', 'value');
  $subfilter->condition('field_other_field', 'value2');

  $base_filter->filter($subfilter);
  $query->filter($base_filter);
}

Implementing your filters this way should really be a last resort. The other option that should be considered first is the node access system, which can define node-level permissions that can get indexed in services like Solr.

Categories: Drupal News

ThinkShout: Deploying a Jekyll Site on GitHub, Travis CI, and Amazon S3

Planet Drupal - Sat, 23/08/2014 - 3:00am

When we launched the new version of ThinkShout.com last spring, something glaring was missing: the thing that companies like Pantheon and Acquia have worked so hard to solve for more complex Drupal sites, namely a deployment workflow that makes it dead simple to deploy changes to your site and preview them before publishing to a production server. At the time of launch, we had some rudimentary tools in place, specifically a set of Rake tasks to build the site and deploy to separate staging and production environments.

This worked fine for the uber geeks among us, who had a full Ruby stack running and were proficient using git and running terminal commands. But for those less technically inclined, not so good. Not to mention, the lack of automation meant lots of room for error. The talented team at Development Seed created Jekyll hook, a Node-based app that listens for notifications from GitHub, then builds and deploys the site based on a number of configuration options or customizations to the build script. That seemed like a good solution, and we even started work on our own fork of the project. It was moving along nicely, and we had it running on Heroku, which largely eliminated the need for maintaining a server. Our customizations allowed us to deploy to S3 using the powerful s3_website gem and to deploy to different buckets depending on the branch being committed. Still, this solution required a good deal more care and maintenance than a typical site hosted on Pantheon or Acquia, and it lacked any built-in visual status or notifications.

Around the same time, I received a great tip while attending CapitalCamp: use Travis CI to test, build, and deploy the site. This was such a simple and great idea that I couldn't help but slap myself on the head for not thinking of it sooner. Travis is one of the leading continuous integration platforms, with tight GitHub integration. It's free for open source projects and charges a modest monthly fee for private ones. It's also dead simple to configure, comes with loads of built-in features, and requires little to no ongoing maintenance. While I knew of Travis (you can't help but see those nice "build passing" badges on all your favorite open source projects), I didn't realize just how powerful it can be. Some highlights include:

  • Supports all the major platforms, including PHP, Ruby, Node, Python, and Java.
  • Lots of major databases and services are available, e.g. MySQL, PostgreSQL, Redis, Memcache, etc.
  • Has built-in notifications via email, IRC, and other popular services.
  • Can run your test suites and report back the status.
  • Built-in deployment to a number of platforms such as Heroku and Amazon, in addition to your own server.

The secret to all this Travis magic lies in a .travis.yml file located in the project root. For ThinkShout.com, it looks something like this:

language: ruby
rvm: 2.0.0
script: "./_scripts/travis_build.sh"
branches:
  only:
    - master
    - live
env:
  global:
    - secure: ...
    - secure: ...
notifications:
  hipchat:
    rooms:
      secure: ...

I won't go through this line by line (there's great documentation for that), but basically this tells Travis:

  • That we need an environment running Ruby 2.0.0.
  • To execute ./_scripts/travis_build.sh for our build.
  • To only trigger the build on the master and live branches.
  • To send a notification to our HipChat project room.

The build script is very simple as well:

#!/bin/bash
if [[ $TRAVIS_BRANCH == 'master' ]] ; then
  bundle exec rake stage
elif [[ $TRAVIS_BRANCH == 'live' ]] ; then
  bundle exec rake publish
else
  echo 'Invalid branch. You can only deploy from master and live.'
  exit 1
fi

While we could put script commands directly into .travis.yml, having a bash script affords us some additional flexibility, in our case to deploy to different S3 buckets based on the branch being committed to.

When all is said and done, we have a simple automated deployment workflow.

Now the ThinkShout.com deployment workflow goes something like this:

  1. Make a commit to the master branch. This can be done directly in GitHub, using Prose.io, or the old-fashioned way in your own working copy. Note that new features are done in feature branches, which do not trigger a build, and are eventually merged into master for review.
  2. The changes are pushed to our staging site for review within a couple of minutes.
  3. When everything looks good, a pull request is opened comparing master to live.
  4. After any final discussions are complete, the pull request is merged and the code is pushed to the production S3 bucket.

That's it, done. No Ruby stack, no Jekyll build or compass compile, no worrying about S3 access keys. We're excited to refine this workflow further, including adding automated tests using PhantomJS, and put it to a real test for an upcoming site launch for a client. Stay tuned!

Categories: Drupal News

Lullabot: Updating the Drupalize.Me Video Experience

Planet Drupal - Fri, 22/08/2014 - 11:00pm

In this episode the Drupalize.Me team talks about the new video experience we have on the site. We start off by explaining why we made the changes, and what changes we made. We completely updated both the video player and the video pages, and the entire video delivery system, which gave us the ability to offer new features, like toggling playback speed. Most of the episode focuses on the timeline and process we used to tackle this monumental change. Join us to get a glimpse behind the scenes at Drupalize.Me.

Categories: Drupal News

DrupalCon Amsterdam: Trying out joind.in for DrupalCon Amsterdam

Planet Drupal - Fri, 22/08/2014 - 10:21pm

Lewis Nyman has been discussing the difficulties of being a DrupalCon track chair and how we can make it easier for session submitters to collect their speaking history using a service like joind.in or Lanyrd.

A trial page has been set up for DrupalCon Amsterdam on joind.in; you can sign up to attend and test the experience here: https://joind.in/event/view/2355

Get involved in the discussion on groups.drupal.org.

Categories: Drupal News

Mediacurrent: New eBook: Warning Signs you are Outgrowing your CMS

Planet Drupal - Fri, 22/08/2014 - 7:02am

Is your Content Management System (CMS) really working for you? In this eBook, Solutions Architect Paul McKibben takes a lighthearted look at common issues you might find in an aging CMS.

Categories: Drupal News

Get Pantheon Blog: Headless Drupal Demo - Working Code and Call to Action

Planet Drupal - Fri, 22/08/2014 - 4:18am

We had a pretty good turnout here at Pantheon HQ for a Headless Drupal-themed SF Drupal Users Group meetup:

Good crowd at SFDUG for #headlessDrupal pic.twitter.com/55miCJdzSl

— Josh Koenig (@outlandishjosh) August 19, 2014

The excitement is clear. So what's next?

Put Your Code Where Your Mouth Is

In any place where there's much excitement, there also tends to be a lot of discussion about what could or should be done. My suggestion is that we focus on creating real-world implementations with various JS frameworks and other API consuming clients.

To that end, I'm putting my code where my mouth is. Here's a working demo site:

I've placed the rough demo code online at GitHub, so you can set up your own. Feel free to experiment with different implementations, or help out with the listed TODOs. Pull requests are welcome!

Next Steps

My ultimate vision is that we have a commonly accepted goal for demonstration implementations, and we find people who want to work on them for a number of popular JS MVC frameworks. It would also be interesting to have people do straight API implementations in raw Python or Ruby, or even just using Curl.

The end result would be something similar to TodoMVC, which helps people evaluate and embrace different JS frameworks. Having a repository of implementations would definitely speed up the process of attracting front-end developers to think about Drupal as a back-end for future projects.

At the same time, by focusing on practical/working implementations, we can much more effectively provide input and guidance to the continued core (and contrib) development. There's no need for Drupal engineers to build Headless in a vacuum.

Stay tuned for more updates as we see the outcomes of the community response. Based on what I've seen online and at the SF DUG meetup, I think some really exciting energy will come out of this.

Blog Categories: Engineering
Categories: Drupal News

Midwestern Mac, LLC: Solr for Drupal Developers, Part 2: Solr and Drupal, A History

Planet Drupal - Fri, 22/08/2014 - 12:40am

Drupal has included basic site search functionality since its first public release. Search administration was added in Drupal 2.0.0 in 2001, and search quality, relevance, and customization were improved dramatically throughout the Drupal 4.x series, especially in Drupal 4.7.0. Drupal's built-in search provides decent database-backed search, but it offers a minimal set of features and slows down dramatically as the size of a Drupal site grows beyond thousands of nodes.

In the mid-2000s, when most custom search solutions were relatively niche products and the Google Search Appliance dominated the field of large-scale custom search, Yonik Seeley started working on Solr for CNET Networks. Solr was designed to work with Lucene and offered fast indexing, extremely fast search, and, as time went on, other helpful features like distributed search and geospatial search. Once the project was open-sourced and released under the Apache Software Foundation's umbrella in 2006, it became one of the most popular engines for customized and more performant site search.

As an aside, I am writing this series of blog posts from the perspective of a Drupal developer who has worked with large-scale, highly customized Solr search for Mercy (example), and with a variety of small-to-medium sites who are using Hosted Apache Solr, a service I've been running as part of Midwestern Mac since early 2011.

Timeline of Apache Solr and Drupal Solr Integration

If you can't view the timeline, please click through and read this article on Midwestern Mac's website directly.

A brief history of Apache Solr Search and Search API Solr

Only two years after Apache Solr was released, the first module integrating Solr with Drupal, Apache Solr Search, was created. Originally written for Drupal 5.x, the module has been actively maintained for many years and was ported to Drupal 6 and 7, with some relatively major rewrites and modifications to keep it up to date, easy to use, and integrated with Apache Solr's new features over time. As Solr gained popularity, many Drupal sites started switching from core search or the Views module to Apache Solr.

Categories: Drupal News

erdfisch: New module: Image widget default image

Planet Drupal - Fri, 22/08/2014 - 12:39am

As part of a client project, we recently had the requirement of displaying an image field's default image on the node add form, before the user has uploaded a picture. This seemed simple enough to implement, but thinking about it I realized it was quite a generic feature request and that such a module might prove useful for others. Rather by accident, I even noticed that Drupal.org itself employs the same feature on user profile pages.

Thus, I hereby present the Image widget default image module, which does just that: if you have provided a default image for an image field and the user has not yet uploaded an image for that field, it displays the default image as a preview. Not many bells or whistles, but it works. (I can claim that because it comes with tests.) If it does not work for you or if you have any questions, please leave a comment or open an issue on Drupal.org.

Categories: Drupal News

Code Karate: Multiple Views Part 2

Planet Drupal - Thu, 21/08/2014 - 10:47pm
Episode Number: 163

In part 2 of the multiple views series you will learn how to add the jQuery needed to switch between multiple classes. By having the ability to use multiple classes, we will (in part 3) be able to use CSS to change the look and feel of the same view.

Here is the jQuery code used to switch between grid and list view:

Tags: Drupal, Views, Drupal 7, Theme Development, Drupal Planet, Javascript, jQuery
Categories: Drupal News

DrupalCon Amsterdam: Training spotlight: Professional Agile Project Management For Drupal Projects

Planet Drupal - Thu, 21/08/2014 - 9:51pm

Over 30 people attended this wildly successful training at DrupalCon Austin. Now is your chance to attend this training at DrupalCon Amsterdam!

In this course, we cut past the evangelism that exists around Agile, and instead focus on real-world practical training that you can put into action.

The course is delivered using the Agile Scrum techniques it teaches. At the start, delegates see the backlog of requirements that the product owner (a trainer) has developed for the course, and the prioritization of those requirements. The course then progresses in one-hour periods of work called sprints, working through training modules from the top of the backlog.

Part way through the morning, delegates are ready to take over as the product owners. They will take responsibility for specifying the requirements for the course, based on the needs and interests of the delegates in the room, re-prioritise them, and even add completely new requirements. Our trainers will respond to these changes by creating new training modules on the fly based on real project experience, to provide the highest possible value to the delegates.

Through this approach we demonstrate and explain the processes of Agile many times over, and we also demonstrate their value. Delegates leave with real insight into how they can apply Agile and handle some of the challenges they may have faced.

The trainers are highly experienced Agile coaches, who mentor teams at Wunderkraut (known as WunderRoot in the UK), as well as consulting with large clients about ensuring successful delivery of their projects.

Meet the Trainers from Wunderkraut

Steve Parks (steveparks), UK Managing Director
Vesa Palmu (wesku), CEO
Roel De Meester (demeester_roel), CTO Benelux
Florian Huber (fuber), Project Manager, Scrum Master

Attend this Drupal Training

This training will be held on Monday, 29 September from 09:00-17:00 at the Amsterdam RAI during DrupalCon Amsterdam. The cost of attending this training is €400 and includes training materials, meals and coffee breaks. A DrupalCon ticket is not required to register to attend this event.

Our training courses are designed to be small enough to provide attendees plenty of one-on-one time with the instructor, but large enough that they are a good use of the instructor's time. Each training course must meet its minimum sign-up number by 5 September in order for the course to take place. You can help to ensure your training course takes place by registering before this date and asking friends and colleagues to attend.

Register today

Categories: Drupal News

Four Kitchens: DrupalCamp Twin Cities: Frontend Wrap-up

Planet Drupal - Thu, 21/08/2014 - 7:58am

This year's Twin Cities DrupalCamp had no shortage of new faces, quality sessions, trainings, and after-parties. Most of my time was spent in frontend sessions and talking with folks. Since I live in Minneapolis, this camp is especially rewarding from a hometown Drupal pride kind of perspective. Below are some of my favorite sessions and camp highlights.

Community Drupal
Categories: Drupal News

Mediacurrent: UX - Above the Fold & Scrolling

Planet Drupal - Thu, 21/08/2014 - 7:51am

More and more often, when putting together a Drupal design for a website, I am asked about the importance of designing above the fold and whether today's users will scroll to read content.

Categories: Drupal News

Drupal Association News: Drupal Association Values

Planet Drupal - Thu, 21/08/2014 - 4:54am

In my experience as an organization leader, one of the most important tools in my toolbox has always been my personal values. Even when data points in one direction and best practices say you should approach the problem a certain way, it has always been my values that help me make the best decisions as an individual. And the truth is, there are very rarely any decisions that are 100% clear cut. In almost every decision, some amount of judgment is required.

That's why I believe so strongly in defining values for the organizations I work for. When everyone is working from a shared sense of values, we're making decisions - even big giant judgement calls - from the same perspective. To that end, we spent some time this year working with the board and staff to develop a values statement for the Drupal Association.

We started in a board retreat, brainstorming the implicit (though not documented) values of the Association and the larger Drupal Community. The board ranked their favorites, and then we created a committee of board and staff to draft some language. Those initial values statements were vetted by both the entire Association staff and the full board, and then additional edits were made. Here's the result of that process:

The Drupal Association shares the values of our community, our staff, and open source projects:

  • TEAMWORK: We add value to the Drupal community by helping each other solve problems to create quality human and digital experiences.
  • COMMUNICATION: We value communication. We seek community participation. We are open and transparent.
  • ACTION: We act decisively and proactively, embracing what we learn from both our successes and our mistakes.
  • RESPECT: We respect and value inclusivity in our global community and strive to recognize, understand, and respond to its needs.
  • FUN: We create environments that embrace humor resulting in fun, positive, supportive and safe interactions.

To be clear, these are the values we're defining for our staff. We're not trying to impose these values on the larger community. However, we do hope they reflect the values you feel are important in the larger Drupal community as well. We also want to recognize that writing down the words is one thing, and living up to them is something else. We intend to live these values in all our work. 

Now it's your turn. The values are set by the board and staff, but we want to make sure we know what you think.

Flickr photo: Howard Lake

Categories: Drupal News

Nuvole: Git workflow for managing Drupal 8 configuration

Planet Drupal - Thu, 21/08/2014 - 4:30am
The D8 way to replace "features-update" and "features-revert-all".

This is a preview of Nuvole's training at DrupalCon Amsterdam: An Effective Development Workflow in Drupal 8.

One of the new key features of Drupal 8 is the possibility to deal with configuration in code. Since configuration is now in text files, we want to put it under version control in Git to enjoy the many advantages this brings: comparing configuration states, keeping a history of configuration changes and even moving configuration between sites.

Setup

We will assume that you have a development version of Drupal 8, git and drush available on your system. You can set up your Drupal git repository in several ways. One of them is outlined in Building a Drupal site with Git on drupal.org. The document is written for Drupal 7, but can easily be adapted for Drupal 8.
Another, probably simpler method is to simply download a Drupal 8 (alpha) release and initialise a new repository with it.

In either case you should copy example.gitignore to .gitignore and adapt it to your needs to prevent settings.php and the files directory from being versioned.

The next step is to make sure that a configuration directory is versionable. By default Drupal 8 will place the staging directory under sites/default/files and it is considered a good practice to not version that location, but an alternative location can easily be specified in settings.php:

<?php
$config_directories['staging'] = 'config/staging';
?>

It is also possible, and even advisable, to specify a directory outside of the web root. In that case you would put the parent directory of the web root under version control and use ../config/staging. We will see later that it is also possible to add more directories and keys to the $config_directories variable.
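For instance, assuming the repository root sits one level above the web root, settings.php would point at that relative path:

<?php
$config_directories['staging'] = '../config/staging';
?>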

Because the configuration management of Drupal 8 only works between different instances of the same site, the site needs to be cloned across environments. Cloning a Drupal 8 site is done the same way as cloning a Drupal 7 site: just dump the database of the site to clone and import it in the other environment.

Development

After cloning your site, you can go ahead and start configuring it.
Once the part of the configuration you were working on is done, the whole configuration of the site needs to be exported.

local$ drush config-export staging
The current contents of your export directory (config/staging) will be deleted. (y/n): y
Configuration successfully exported to config/staging.

Next, you need to merge the work of other developers. In some cases it may be enough to simply use git pull; otherwise the configuration has to be merged after it has been committed:

  • Add all configuration to git and commit it.

  • Use git pull (or git fetch and git merge) and resolve any conflicts if necessary.

Git can merge changes in text files quite well, but Git does not know about Drupal and its YAML format for configuration. It is therefore important to verify that the merged configuration makes sense and is valid. In most cases it will probably just work, but it is always better to be vigilant and stay on the safe side. So, after merging, you should always run:

local$ drush config-import staging

If the import went smoothly, you can push the configuration to the remote repository. Otherwise the configuration needs to be fixed first.

Deployment

The simplest case is when the configuration on the production site has not been changed. There is an interesting Configuration Read-only mode module that can enforce this.

If the configuration did not change, deploying the new configuration is simply:

remote$ git pull
remote$ drush config-import staging

If the configuration changes on the production site, it is best to frequently export the live configuration into a dedicated directory.
Add a new config directory in settings.php:

<?php
$config_directories['to_dev'] = 'config/to_dev';
?>

remote$ drush config-export to_dev -y

Add, commit, and push it to the production branch so that the developers can deal with it and integrate the changes into the configuration that will be deployed next. Exporting the configuration into a dedicated directory rather than the staging directory avoids the danger of merge conflicts arising on the production site. Deployment to the production site should be kept hassle-free, so it should always be safe to pull from git and import the configuration without risking a conflict.

Important notes

It is important to first export the configuration changes and then pull changes from collaborators, because the export action wipes the directory and re-populates it with the active configuration. Since everything is in git, you can recover from such a mistake without much difficulty, but why make your life complicated?

Import the configuration before pushing it to the remote repository. Broken configuration breaks the site; be a nice co-worker.

Git doesn't solve everything! Imagine Alice and Bob start with the same site, it has one content type and among others an "attachment" field. Alice deletes the attachment field, exports the configuration and pushes it to git. In the meantime, Bob creates a new content type and adds the attachment field to it. Bob exports his configuration, merges Alice's configuration changes without a problem (the changes are separate files) and imports the merged configuration. The attentive reader sees where this leads. The commit of Alice deletes the field storage for the attachment field, but Bob added a field instance which depends on the field storage. The exported configuration now contains a field instance that can't be imported.
At the time of writing, drush will signal a successful import but doesn't actually import it, while the UI is more helpful and complains that the attachment field instance was not imported due to the missing field storage.

Tags: Drupal Planet, Drupal 8, Code Driven Development
Categories: Drupal News

Acquia: Drupal 8 & Empowerment through Drupal

Planet Drupal - Wed, 20/08/2014 - 6:35pm

Part 1 of a 2-part conversation with Angie Byron in front of the cameras at NYC Camp 2014. In this part of our conversation we go over some of the inspiring and thought-provoking ideas we encountered there, and then jump to some of the benefits to users of the technical improvements built into Drupal 8.

Categories: Drupal News

Kristian Polso: How to create Drupal Commerce products programmatically

Planet Drupal - Wed, 20/08/2014 - 5:00pm
Sometimes you just need to get your hands dirty and start adding Drupal Commerce products programmatically. Luckily, that is not that hard a thing to do.
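For instance, here is a minimal sketch of the general approach using the Drupal Commerce 7.x API (the product type, SKU, title, and price values are hypothetical):

<?php
// Create a new product entity of the 'product' type and save it.
$product = commerce_product_new('product');
$product->sku = 'EXAMPLE-001';
$product->title = 'Example product';
$product->commerce_price[LANGUAGE_NONE][0] = array(
  'amount' => 1999, // Prices are stored in minor units: $19.99.
  'currency_code' => 'USD',
);
commerce_product_save($product);
?>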
Categories: Drupal News

Modules Unraveled: 115 Drupal Core Gittip Team with Jennifer Hodgdon, Bojhan Somers, Alex Pott and Cathy Theys - Modules Unraveled Podcast

Planet Drupal - Wed, 20/08/2014 - 4:00pm
Published: Wed, 08/20/14
Download this episode

GitTip
  • What is GitTip? How does it work?

  • What is a GitTip team?

Drupal Core GitTip Team
  • How did the Drupal Core team come about? What prompted its genesis?

  • Who is the organizer of the Drupal Core team, and who is benefiting from it?
    19 members: Alex and Cathy are administering the group, a couple are on vacation, and 16 others are receiving money.

  • On the GitTip page it says your goal is $5,000 US/week. What would that cover?
    Cathy: This week is the first week that we will not be able to fund the modest goal of giving people $64/week. The past few weeks we have been paying out $700. We have now eaten all our balance and have only $350 coming in this week.
    The $5k goal is a guess at what it would cost to fund 6 people at about ¼ time.

  • What have you all been working on lately as a result of this funding?
    Cathy: tips are for work already done, so… I'm not sure. Maybe it motivates future work, or planning to be able to do future work? Jen? Bojhan?
    What has this funding enabled you to do?

Episode Links: GitTip Team Page, GitTip Members, Drupal Core Gittip Team FAQ, Bojhan on drupal.org, Bojhan on Twitter, Jennifer on drupal.org, Jennifer on Twitter, Jennifer’s company site, Cathy on drupal.org, Cathy on Twitter, Alex on drupal.org, Alex on Twitter, Core Conversation at Amsterdam, Drupal.org Working Groups, Dries’ call for many separate companies to hire core developers, Jennifer’s case for core developers on staff at small shops, Alex Pott hired by Chapter Three, Cathy hired by BlackMesh, Gittip issue to change how team funds are split
Tags: Drupal 8, Drupal Core, planet-drupal
Categories: Drupal News

PreviousNext: Drupal 8 Now: PHPUnit tests in Drupal 7

Planet Drupal - Wed, 20/08/2014 - 3:10pm

Drupal 8 comes with built-in support for PHPUnit, the industry standard for unit testing.

But that doesn't mean you can't use PHPUnit for your testing and CI in Drupal 7, if you structure your code well.

Read on to find out what you need to do to use PHPUnit in Drupal 7.
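The core idea hinted at above is to keep business logic in plain PHP functions or classes that don't require a Drupal bootstrap, so PHPUnit can load and test them directly. A minimal sketch, with a hypothetical module function:

<?php
// Hypothetical pure function from a Drupal 7 module. It avoids any
// drupal_* calls, so PHPUnit can test it without bootstrapping Drupal.
function my_module_calculate_discount($price, $rate) {
  return round($price * (1 - $rate), 2);
}

// A PHPUnit (3.x/4.x era) test case covering the function above.
class MyModuleDiscountTest extends PHPUnit_Framework_TestCase {

  public function testDiscountIsApplied() {
    $this->assertEquals(75.00, my_module_calculate_discount(100.00, 0.25));
  }

}
?>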

Categories: Drupal News

Phase2: Profiling Drupal Performance with PHPStorm and Xdebug

Planet Drupal - Wed, 20/08/2014 - 7:34am

Profiling is about measuring the performance of PHP code, at least when we are talking about Drupal and Xdebug. You might need to profile your site or app if you work at a firm where performance is highly scrutinized, or if you are having problems getting a migration to complete. Whatever the reason, if you have been tasked with analyzing the performance of your Drupal codebase, profiling is one great way of doing so. Note that Xdebug’s profiler does not track memory usage. If you want to know more about memory performance tracking you should check out Xdebug’s execution trace features.

Alright then, let's get started!

Whoa there, cowboy! First you need to know that the act of profiling your code itself takes resources to accomplish. The more work your code does, the more information the profiler stores; file sizes for these logs can get very big very quickly. You have been warned. To get going with profiling Drupal in PHPStorm and Xdebug you need:

  • PHPStorm
  • PHP with the Xdebug extension
  • A website running on Drupal.

To set up your environment, edit your php.ini file and add the following lines:

xdebug.profiler_output_dir=/tmp/profiler/
xdebug.profiler_enable=on
xdebug.profiler_trigger=on
xdebug.profiler_append=on

Depending on what you are testing and how, you may want to adjust the settings for your site. For instance, if you are using Drush to run a migration, you can't start the profiler on demand, and that affects the profiler_trigger setting. For my dev site I used the php.ini config you see above and simply added the URL parameter “XDEBUG_PROFILE=on” to my site's URL; this starts Xdebug profiling from the browser.

To give you an idea of what is possible, let's profile the work required to view a simple Drupal node. To profile the node view I visited http://profiler.loc/node/48581?XDEBUG_PROFILE=on in my browser. I didn't see any flashing lights or hear bells and whistles, but I should now have a binary file that PHPStorm can inspect, located in the path I set up in my php.ini profiler_output_dir directive.

Finally, let's look at all of our hard work! In PHPStorm, navigate to Tools->Analyze Xdebug Profile Snapshot. Browse to your profiler output directory and you should see at least one cachegrind.out.%p file (%p refers to the process id the script used). Open the file with the largest process id appended to the end of the filename.

We are then greeted with a new tab showing the results of the profiler.

The output shows us the functions called, how many times they were called, and the amount of execution time each function took. Additionally, you can see the hierarchy of all function calls and follow potential bottlenecks down to their roots.

There you have it! Go wild and profile all the things! Just kidding, don’t do that.

Categories: Drupal News

