
Learning browser-based testing with Selenium

| development, geek, work

I want to get better at testing my applications so that clients and end-users run into fewer bugs. I’m learning how to use Selenium to write browser-based tests. So far, I’ve written eight tests and fixed three bugs. This is good.

I’m using the Selenium IDE, and I’m looking forward to trying other options. I like the way that the Selenium IDE lets me record and step through tests easily. The Selenium Stored Variables Viewer plugin was really helpful, because it made it easy for me to store values and view them. I’m slowly getting the hang of different commands and asserts. Next week, I’m going to read the command reference so that I can index the possibilities.

People tell me I’m a fast developer. I want to try swapping some of that speed for better accuracy – slowing down and doing things right, with tests to back that up. It feels like it takes a lot of time to click around and wait for the pages to respond, or even to run these web-based tests and iterate until I’ve gotten them right, but it’s better for me to do it than for other people to run into these errors. Besides, with tools and metrics, I can make testing more like a game.

Onward and upward!

Git bisect and reversing a mistaken patch

| development, geek

2011/12/09: Updated links

Using version control software such as git is like slicing the bread of programming. It lets you deal with changes in small chunks instead of having to troubleshoot everything at the same time. This comes in really handy when you’re trying to isolate a problem. If you can tell which change broke your system, then you can review just those changes and look for ways to fix them.

For example, some views I’d created in Drupal 6 had mysteriously vanished from the web interface. Fortunately, I’d exported them to code using Features, so I knew I could get them back. I needed to find out which change removed them so that I could make sure I didn’t accidentally undo the other relevant changes.

git bisect to the rescue! The idea behind git bisect is the same one behind the marvelous efficiencies of binary search: test something in the middle of what you’re looking at. If it’s good, take the later half and test the middle. If it’s bad, take the earlier half and test the middle. It’s like what people do when guessing a number between 1 and 100. It makes sense to start at 50 and ask: is the number greater than 50? If it is, ask: is the number greater than 75? And so on. Handy trick, except sometimes it can be difficult to add or subtract in your head and figure out the next number you should ask.

git bisect does that adding-up for you. You start with git bisect start in the root of your source tree. You tell it if the current version is considered broken, using git bisect bad. You tell it the last known working version, with git bisect good changeset-identifier. Then it checks out a changeset in the middle of that range. Test it to see whether it works, and type in git bisect good or git bisect bad depending on what you get. It’ll present you with another changeset, and another, until it can identify the first changeset that fails. If you can automate the test, you can even use the git bisect run command to quickly identify the problem.
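Here’s a rough sketch of what a session looks like; the changeset ID (a1b2c3d) and the test script name are placeholders for your own project:

git bisect start
git bisect bad                  # the version that's checked out is broken
git bisect good a1b2c3d         # last changeset known to work
# git checks out a changeset halfway in between; test it, then report:
git bisect good                 # ...or git bisect bad, as appropriate
# repeat until git names the first bad changeset, then clean up:
git bisect reset

# If the test can be scripted (exit 0 = good, non-zero = bad),
# git can drive the whole search for you:
git bisect run ./check-views.sh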

Now that you’ve identified the relevant changeset, you can use git show changeset-identifier to look at the changes. If you save the output to a file, you can edit the diff and then use the patch command to apply it in reverse. Alternatively, you can undo or tweak your changes by hand.
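For example (the changeset ID and filename here are just placeholders):

git show a1b2c3d > suspect.diff      # save the offending change
patch -p1 -R < suspect.diff          # -R applies the diff in reverse
# or, staying entirely within git:
git revert a1b2c3d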

The git bisect section in the free Git SCM book has more information, as does the manual page. Hope this helps!

Switching back to Linux as my development host

| development, geek, work

I switched back to using my Ubuntu partition as my primary development environment instead of using Windows 7. I still use a virtual machine to isolate development-related configuration from the rest of my system.

Linux makes better use of my computer memory. I have 4 GB of RAM on this laptop. My 32-bit Windows 7 can only access 3 GB of it, a limit I regularly run into. The resulting swapping slows down my development enough to be noticeable. I could switch to 64-bit Windows, but reinstalling is a disruption I don’t want to deal with right now. On Linux, my processes can access up to 4 GB of memory each, which means there’s even room for future expansion. I’m at just the right level now – using 3.9 GB, but not swapping out.

Using Linux also means that it’s easy for me to edit files in my virtual machine. Instead of setting up Samba + Eclipse, I can use ssh -X to connect to my virtual machine and run Emacs graphically. If I want to use Eclipse for step-by-step debugging, I can use sshfs, smbfs, or NFS to mount the files.
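Something like this, assuming the virtual machine answers to the hostname devvm and the code lives under /home/sacha/drupal there (both placeholders):

ssh -X devvm                                   # log in with X11 forwarding...
emacs &                                        # ...then run Emacs on the VM; it displays locally

mkdir -p ~/mnt/devvm
sshfs devvm:/home/sacha/drupal ~/mnt/devvm     # mount the VM's files locally for Eclipse
fusermount -u ~/mnt/devvm                      # unmount when done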

The key things I liked about Microsoft Windows 7 were Autodesk SketchBook Pro and Microsoft OneNote. I can draw a bit using the GIMP or Inkscape, although I really need to figure out my smoothing settings or whatever it is that would make drawing as fun as it is in those other programs. I don’t need those programs when I’m focused on development, though, and it’s easy enough to reboot if I want to switch.

Hibernate doesn’t quite work, but I’ve been suspending the computer or shutting it down, and that works fine. Pretty cool!

Managing configuration changes in Drupal

| development, drupal, geek, work

One of our clients asked if we had any tips for documenting and managing Drupal configuration, modules, versions, settings, and so on. She wrote, “It’s getting difficult to keep track of what we’ve changed, when, and for what reason, and what settings are in there that need to be moved to production versus what settings are there for testing purposes.” Here’s what works for us.

Version control: A good distributed version control system is key. This allows you to save and log versions of your source code, merge changes from multiple developers, review differences, and roll back to a specified version. I use Git whenever I can because it allows much more flexibility in managing changes. I like the way it makes it easy to branch code, too, so I can start working on something experimental without interfering with the rest of the code.
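A quick sketch of that branching workflow (the branch name is just an example):

git checkout -b experiment     # start a side branch for the experimental work
# ...hack and commit as usual...
git checkout master            # switch back to the main line of development
git merge experiment           # bring the experiment in once it works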

Issue tracking: Use a structured issue-tracking or trouble-ticketing system to manage your to-dos. That way, you can see the status of different items, refer to specific issues in your version control log entries, and make sure that nothing gets forgotten. Better yet, set up an issue tracker that’s integrated with your version control system, so you can see the changes that are associated with an issue. I’ve started using Redmine, but there are plenty of options. Find one that works well with the way your team works.

Local development environments and an integration server: Developers should be able to experiment and test locally before they share their changes, and they shouldn’t have to deal with interference from other people’s changes. They should also be able to refer to a common integration server that will be used as the basis for production code.

I typically set up a local development environment using a Linux-based virtual machine so that I can isolate all the items for a specific project. When I’m happy with the changes I’ve made to my local environment, I convert them to code (see Features below) and commit the changes to the source code repository. Then I update the integration server with the new code and confirm that my changes work there. I periodically load other developers’ changes and a backup of the integration server database into my local environment, so that I’m sure I’m working with the latest copy.

Database backups: I use Backup and Migrate for automatic trimmed-down backups of the integration server database. These are regularly committed to the version control repository so that we can load the changes in our local development environment or go back to a specific point in time.

Turning configuration into code: You can use the Features module to convert most Drupal configuration changes into code that you can commit to your version control repository.

There are some quirks to watch out for:

  • Features aren’t automatically enabled, so you may want to have one overall feature that depends on any sub-features you create. If you are using Features to manage the configuration of a site and you don’t care about breaking Features into smaller reusable components, you might consider putting all of your changes into one big Feature.
  • Variables are under the somewhat unintuitively named category of Strongarm.
  • Features doesn’t handle deletion of fields well, so delete fields directly on the integration server.
  • Some changes are not exportable, such as nodequeue. Make those changes directly on the integration server.

You want your integration server to be at the default state for all features. On your local system, make the changes you want, then create or update features to encapsulate those changes. Commit the features to your version control repository. You can check if you’ve captured all the changes by reverting your database to the server copy and verifying your functionality (make a manual backup of your local database first!). When you’re happy with the changes, push the changes to the integration server.
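If you also use Drush with the Features integration installed, the round trip looks roughly like this; mysite_core is a placeholder for your own feature name:

# On your local environment: capture the configuration changes in code
drush features-update mysite_core
git add sites/all/modules/features/mysite_core
git commit -m "Update mysite_core feature"

# Check that the code really captures everything: revert the local
# database's configuration to what's in code, then retest
drush features-revert mysite_core

# On the integration server, after pulling the new code
drush features-revert mysite_core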

Using Features with your local development environment should minimize the number of changes you need to directly make on the server.

Documenting specific versions or module sources: You can use Drush Make to document the specific versions or sources you use for your Drupal modules.
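For example, a build described by a make file can be rebuilt with a single command; mysite.make and the build directory here are placeholders:

# mysite.make lists the Drupal core version and each module's version and
# source; drush make rebuilds the whole tree from it
drush make mysite.make ./mysite-build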

Testing: In development, there are few things as frustrating as finding you’ve broken something that was working before. Save yourself lots of time and hassle by investing in automated tests. You can use Simpletest to test Drupal sites, and you can also use external testing tools such as Selenium. Tests can help you quickly find and compare working and non-working versions of your code so that you can figure out what went wrong.

What are your practices and tips?

2011-06-09 Thu 12:25

Thinking about our development practices

| development, geek, kaizen

We’re gearing up for another Drupal project. This one is going to be interesting in terms of workflow. I’m working with the clients, an IBM information architect, a design firm, another IBM developer, and a development firm. Fortunately, the project manager (Lisa Imbleau) has plenty of experience coordinating these inter-company projects.

I feel a little nervous about the project because there are a lot of things to be clarified and there’s a bit of time pressure. I’m sure that once we get into the swing of things, though, it’ll be wonderful.

I’m used to working with other developers within IBM, and I’m glad I picked up a lot of good practices from the people I’ve had the pleasure to work with over the years. I’m looking forward to learning even more from the people I get to work with this time around.

In particular, I’m looking forward to:

  • learning from how Lisa manages the project, clarifies requirements, and coordinates with other companies
  • learning from the other developers about what works and doesn’t work for them
  • planning more iteratively and getting more testing cycles in
  • implementing continuous integration testing using Hudson and Simpletest
  • getting even deeper in Drupal: Views, Notifications, maybe Organic Groups
  • using a git-integrated issue tracker such as Redmine
  • … while knowing when to just use pre-built modules, of course

It’s also a good opportunity to figure out which of our practices are new to others, and to write about those practices and improve them further. Some things that have turned up as different:

  • We organize our Drupal modules into subdirectories of sites/all/modules/: features, custom, contrib, and patched.
  • I use Simpletest a lot, and would love to help other people with it or some other automated testing tool.

Much learning ahead!

VMware, Samba, Eclipse, and Xdebug: Mixing a virtual Linux environment with a Microsoft Windows development environment

| development, drupal, geek

I’m starting the second phase of a Drupal development project, which means I get to write about all sorts of geeky things again. Hooray! So I’m investing some time into improving my environment set-up, and taking notes along the way.

This time, I’m going to try developing code in Eclipse instead of Emacs, although I’ll dip into Emacs occasionally if I need to do anything involving keyboard macros or custom automation. Setting up a good Eclipse environment will help me use Xdebug for line-by-line debugging. var_dump() can only take me so far, and I still haven’t figured out how to properly use Xdebug under Emacs. Configuring Eclipse will also help me help my coworkers, who tend to not be big Emacs fans. (Sigh.)

So here’s my current setup:

  • A Linux server environment in VMware, so that I can use all the Unix tools I like and so that I don’t have to fuss about with a WAMP stack
  • Samba for sharing the source code between the Linux VM image and my Microsoft Windows laptop
  • Xdebug for debugging
  • Eclipse and PDT for development

I like this because it allows me to edit files in Microsoft Windows or in Linux, and I can use step-by-step debugging instead of relying on var_dump.

Setting up Samba

Samba allows you to share folders on the network. Edit your smb.conf (mine’s in /etc/samba/) and uncomment/edit the following lines:

security = user

[homes]
   comment = Home Directories
   browseable = no
   read only = no
   valid users = %S

You may also need to use smbpasswd to set the user’s password.
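For example (replace the username with the account that owns the shared files):

sudo smbpasswd -a sacha    # add the user to Samba's password database and set a password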

Xdebug

Install php5-xdebug or whatever the Xdebug package is for PHP on your system. Edit xdebug.ini (mine’s in /etc/php5/conf.d) and add the following lines to the end:

[Xdebug]
xdebug.remote_enable=on
xdebug.remote_port=9000
xdebug.remote_handler=dbgp
xdebug.remote_autostart=1
xdebug.remote_connect_back=1

Warning: this allows debugging access from any computer that connects to it. Use this only on your development image. If you want to limit debugging access to a specific computer, remove the line that refers to remote_connect_back and replace it with this:

xdebug.remote_host=YOUR.IP.ADDRESS.HERE
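One quick way to check that the extension actually loaded (assuming the command-line php uses the same configuration):

php -m | grep -i xdebug    # should print "xdebug" if the extension is loaded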

Eclipse and PDT

I downloaded the all-in-one PHP Development Toolkit (PDT) from http://www.eclipse.org/pdt/, unpacked it, and imported my project. After struggling with JavaScript and HTML validation, I ended up disabling most of those warnings. Then I set up a debug configuration that used Xdebug and the server in the VM image, and voila! Line-by-line debugging with the ability to inspect variables. Hooray!

2011-05-31 Tue 17:37

Rails: Preserving test data

| development, geek, rails

I’m using Cucumber for testing my Rails project. The standard practice for automated testing in Rails is to make each test case completely self-contained and wipe out the test data after running the test. The test system accomplishes this by wrapping the operations in a transaction and rolling that transaction back at the end of the test. This is great, except when you’re developing code and you want to poke around the test environment to see what’s going on outside the handful of error messages you might get from a failed test.

I set up my test environment so that data stays in place after a test is run, and I modified my tests to explicitly delete whatever data they need cleared. This is what I set in my features/support/env.rb:

Cucumber::Rails::World.use_transactional_fixtures = false

I also removed database_cleaner.

You can set this behaviour on a case-by-case basis with the tag @no-txn.

Running the tests individually with bundle exec cucumber ... now works. I still have to figure out why the database gets dropped when I do rake cucumber, though…
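For reference, a couple of ways to run things piecemeal (the feature file name is a placeholder):

bundle exec cucumber features/views.feature       # run a single feature file
bundle exec cucumber features/views.feature:12    # run just the scenario starting at line 12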

2011-04-24 Sun 16:21