Rick Hurst Web Developer in Bristol, UK

Month: September 2011

Tell us about the browser wars grandpa!

After recently reading this blog post, I was reminiscing about the circuitous route that got me into web development. I’ve given a brief account of this before, but I thought it was worth recounting in detail, as it may be of interest to those looking to get into the field. I also just like to waffle about the olden days…

In 1997 I was newly arrived in Bristol, a recent-ish graduate in Environmental Science with the compulsory couple of years of post-uni experience travelling and doing random jobs, starting life in a new city and wondering what life had in store for me. I started out trying to forge a career in conservation, juggling voluntary work at a small environmental organisation with a painful part-time job in a call centre to (barely) pay the bills. My job applications were getting me nowhere – although I had a degree, I had very little relevant work experience, and I was lucky to receive a rejection letter, let alone an interview. The salt in the wound was getting turned down for a part-time minimum wage admin job at the organisation where I was volunteering (ironically, the reason given was “lack of experience with databases”!).

At the time I hadn’t used the internet. I had seen people using it a couple of years previously in the computer lab when I was at university – a classmate showed me a terminal screen displaying a text web page with some jokes on it, and I was severely underwhelmed. Several years had passed with no inclination to use the internet, until someone (I wish I could remember who – I owe them a pint!) suggested that if I saved my CV out of MS Word as HTML I could upload it to the internet, and people might find it and offer me a job. Willing to try anything, after a bit of research (talking to people – remember, I’d never been on the internet at this point) I took a trip to PC World, bought a dial-up modem and picked up a Freeserve CD-ROM. That evening I uploaded my HTML CV as the index page of my free webspace, and then surfed the internet for the first time.

Disappointed at the lack of job offers the next day, I bought a graphic design magazine which came with a CD-ROM containing some HTML website templates. I hacked around with some of them using Notepad, though it was all a bit confusing – framesets, JavaScript rollover images and so on – but I was pleased that I was able to work out how to customise the pages and add my own content without needing any specialist software, so I proceeded to create a website for the environmental organisation I was volunteering for, along with a few other personal project sites. I bought a copy of “HTML for Dummies” and learned how to create web sites from scratch. I also obsessively studied the HTML source of any site I came across – I wasn’t happy until I understood how it worked.

An interesting thing happened a few weeks later: one of my first emails was a complaint from a design agency. When they typed the name of the environmental organisation they had just built a website for into AltaVista, the fugly site that I had built came up as number one, but their site didn’t show up at all. I took a look at their version, and when it (eventually) loaded, saw that they had created a “website” that consisted of a single JPEG. I hadn’t submitted my version to any search engines, but I had taken time to contact other environmental organisations who had websites and ask if they would add a link to my site. I did this purely because at the time I assumed you had to pay to get listed on a search engine, so people would only find my DIY site by surfing to it. At this point the penny dropped: although my experience amounted to a few evenings’ experimentation, I was already ahead of a professional design agency in my knowledge of how the internet works.

In 1998 I got a new temp admin job (ok, it was “receptionist”, but I like to gloss over that) in the computer science department of a university, where I had access to relatively fast, always-on internet. Even though it wasn’t part of my remit, I volunteered to help keep the department intranet up to date, and spent most of my time teaching myself HTML and JavaScript while I pretended to work. I also started experimenting with Paint Shop Pro and voluntarily building websites for friends and colleagues. This job was a massive improvement over the call centre, but I was still pretty much on minimum wage and felt like I was going nowhere. Still, at least I had a new hobby…

A few months in I took a life-changing phone call. A cold call from a recruitment agent is the bane of any receptionist’s life, but this one phoned the department asking if we’d stick a notice up asking for graduates with HTML knowledge, as he had a big client looking for several people to start ASAP. I obliged, but also – after some dilemma over whether I was good enough (scarred by constant job application rejections) – emailed him a link to my personal website and CV, which now contained some inflated references to my web design experience. A week later I went for an interview. Two weeks later I was the director of a one-man-band limited company, sitting on a train to Cardiff to start work as a contract “web designer” in a new department of BT.

I suddenly had a career – for the first time ever I was making an acceptable living, and for the first time I had colleagues to swap notes with. Someone taught me how to use layers in Photoshop, someone else how to lay out web pages using tables and framesets. The “maverick” of the department demoed some experiments he had been doing with “Cascading Style Sheets”, which could be used on newer browsers such as IE4. We churned out hundreds of terrible, but improving, websites, giving small businesses their first web presence. We huddled together round a monitor, confused and delighted, as we stumbled across some of the first Macromedia Flash websites appearing on the web. HTML was dead, we naively decided – we needed to learn Flash fast or become obsolete. Luckily, I was also introduced to using ASP and Access databases to create dynamic web sites, and started experimenting with PHP. The rest is on LinkedIn.

The point I’m laboriously making is that although the techniques have moved on, I’m self-taught and colleague-taught. There were no college courses back then that taught the skills needed, because the technology was being constantly invented and reinvented. This isn’t just a history lesson: the technology is *still* being invented and the courses and tutors can’t keep up. The nearest we have now are the workshops run by industry professionals and conferences with talks by industry professionals. But even then, don’t think you’ll be learning anything that you can’t learn from someone’s blog, from studying the source code of their site (for front-end at least – try GitHub for server-side code examples), or from someone you chat to on a mailing list, Twitter, IRC channel or forum – the cutting-edge layout techniques are being invented and experimented with right now by a fifteen-year-old in his/her bedroom, months or years before they will appear in a workshop or conference.

I still have to teach myself new techniques or technologies every time I start a new project, and will continue to keep learning. Despite being in the game for over a decade, I still feel “grateful” that I’ve managed to find something I enjoy doing and can make a living from. I’m aware that I can’t become complacent – even though I don’t jump instantly on every new technique that gets shouted round the twitterverse, I know I need to keep up with anything pertinent or risk getting left behind. I also freely share my knowledge, via this blog, via Twitter, via old-skool mailing lists and forums. I hope it stays this way – new techniques keeping things fresh, and the best learning resources not locked behind a pay-wall, so that anyone with the determination and aptitude can pursue a career (or hobby) in web design and/or development.

If you are looking to learn web development, the best thing you can do is find yourself some free web space and set up a personal sandbox site to experiment and display what you are working on. Set up a blog to document it all. My lucky break came from learning HTML in the early days, just as there was a “goldrush” for people with HTML knowledge – you may have to aim a bit higher now. The goldrushes are still happening though – right now it seems to be mobile website development, Facebook app/page development (shudder) and Drupal development, but who knows what it will be this time next year. The point is that you will already need to have experience in what employers are looking for, and the person to teach it to you is none other than yourself.

Luckily this is a unique industry where knowledge is shared freely and we all learn from each other. Whilst it may implode into bickering sometimes, we have an active, talkative web design and development community. If you specialise in a particular technique or technology, there are whole thriving sub-communities dedicated to it, holding meet-ups and grassroots conferences and discussing it day by day, hour by hour. Go and find them and get involved!

Swoop Patagonia re-skin 3

screengrab of swoop reskin version 3

Last night Dan Fairs pushed the latest version of the Django-powered Swoop Patagonia site live. The site has several new content management features created by Dan with the help of Ben Mason, allowing Swoop staff to create and manage content in a more flexible manner, and has been re-skinned (by yours truly) to use a new design created by the talented designer Ming Cheung. Another successful team effort!

Test post from droptext

Just experimenting to see if it is possible to create a blog post on my new eatStatic-based blog using Droptext on my iPhone.

Under the current set-up, if I wanted to use the “drafts” folder, I’d then need to log into dropbox.com to move the file to the main folder, as there is no way of moving files in Droptext (as far as I can see).

Inserting an image would also be tedious – I can upload a photo using the Dropbox app, but I can’t rename it to something suitable without going to dropbox.com, and writing HTML on an iPhone is never much fun.

I think if I want to consider mobile blogging, I’ll need to build something more convenient, such as an email-to-blog-post script, similar to Posterous and WordPress plugins I’ve seen.

Object storage and retrieval in PHP part 2 – MongoDB

In part one, I talked about how to save and retrieve a PHP object instance using JSON files. In this post I talk about the same operation using MongoDB, along with some gotchas.

I’ve only tried this in very limited circumstances, mainly to see how feasible it would be to make eatStatic seamlessly switch between JSON files and MongoDB. I naively thought that you could just throw a JSON file at MongoDB and have it store it for you, but the examples I’ve found take the PHP object and convert it magically, and also pass back PHP data structures rather than raw JSON.

This post doesn’t cover installing MongoDB – I skipped between several different examples/tutorials before I got it working, so I can’t remember exactly how I did it in the end. Once it is installed, you can connect to it from PHP like this:-


$m = new Mongo(); // connect
$db = $m->cms; // select a database


For comparison purposes we’ll create a simple case study object like in Part 1:-


class case_study {
    var $id;
    var $title;
    var $body_text;
    var $skills = array();
}

$case_study = new case_study;

$case_study->id = 'my/case_study';
$case_study->title = 'My case study';
$case_study->body_text = 'Some text for the case study';
$case_study->skills['css'] = 'CSS';
$case_study->skills['sitebuild'] = 'Site Build';


which gives us:-


case_study Object
(
    [id] => my/case_study
    [title] => My case study
    [body_text] => Some text for the case study
    [skills] => Array
        (
            [css] => CSS
            [sitebuild] => Site Build
        )

)


To store this in MongoDB, we simply specify a collection to use (a “collection” is analogous to a table in a relational database, but collections are created on demand and don’t need a schema) and then insert our object:-


$case_studies = $db->case_studies; // select (or create on demand) the collection
$case_studies->insert($case_study); // insert the object as a new document
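
By default the driver fires the insert off without waiting for a reply. If I remember the 1.x PECL driver correctly, insert() also accepts an options array so you can ask for the write to be acknowledged – treat this as an assumption based on that legacy driver rather than gospel:-

// 'safe' => true asks the server to confirm the write, throwing a MongoCursorException on failure
$case_studies->insert($case_study, array('safe' => true));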


To get it back we use:-


$case_study = $case_studies->findOne(array('id' => 'my/case_study'));


Passing this to print_r() gives us:-


Array
(
    [_id] => MongoId Object
        (
            [$id] => 4e6de720d2db288b0c000000
        )

    [id] => my/case_study
    [title] => My case study
    [body_text] => Some text for the case study
    [skills] => Array
        (
            [css] => CSS
            [sitebuild] => Site Build
        )

)


Note a couple of things:-

  • It has given us back an array instead of an object
  • It has inserted its own unique ID, [_id]

We don’t need to worry about the extra ID, as we’ll be using our own id field for lookups, so it can be ignored. To convert the array back to an object, simply do:-


$case_study = (object) $case_study;


Which takes us back to:-


stdClass Object
(
    [_id] => MongoId Object
        (
            [$id] => 4e6de720d2db288b0c000000
        )

    [id] => my/case_study
    [title] => My case study
    [body_text] => Some text for the case study
    [skills] => Array
        (
            [css] => CSS
            [sitebuild] => Site Build
        )

)
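
To tie that together, here is the kind of helper I would wrap around it (just a sketch – get_case_study() is a made-up name, and it assumes the $db handle from the connection example above). The second argument to findOne() lets you leave mongo’s _id out of the result entirely:-

// hypothetical helper: look a case study up by our own id and hand back an object
function get_case_study($db, $id) {
    // the second argument excludes mongo's _id field, as we use our own id for lookups
    $data = $db->case_studies->findOne(array('id' => $id), array('_id' => 0));
    if ($data === null) {
        return null; // nothing stored under that id
    }
    return (object) $data; // cast the returned array back to an object, as above
}

$case_study = get_case_study($db, 'my/case_study');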

Object storage and retrieval in PHP part 1 – JSON files

I mentioned in my post about eatStatic that I was using JSON files for storage of objects and arrays, but hoped to make it switchable to use MongoDB. This is part one of a two-part post, demonstrating the use of JSON files with json_encode() and json_decode().

Take the following simple class:-


class case_study {
    var $id;
    var $title;
    var $body_text;
    var $skills = array();
}


If we create an instance of this and add some data:-


$case_study = new case_study;
$case_study->id = 'my/case_study';
$case_study->title = 'My case study';
$case_study->body_text = 'Some text for the case study';
$case_study->skills['css'] = 'CSS';
$case_study->skills['sitebuild'] = 'Site Build';


and pass it to print_r(), we get this:-


case_study Object
(
    [id] => my/case_study
    [title] => My case study
    [body_text] => Some text for the case study
    [skills] => Array
        (
            [css] => CSS
            [sitebuild] => Site Build
        )

)


If we now encode it as JSON:-


$json_str = json_encode($case_study);


At this point, we can save the JSON string to the filesystem – I tend to create a unique ID based on the current date/time and a random string. I won’t detail it all here, but you can see some of the helper functions I use in eatStaticStorage.class.php and eatStatic.class.php. One thing worth noting is that when reading a .json file back in from the filesystem, I was sometimes hitting a bug where the last three characters were omitted – I’m not sure what was causing it, but it was fixed by changing my read_file() method to use file_get_contents() instead of fread().
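
As a rough illustration (this is a simplified sketch rather than the actual eatStatic code – the function names are made up), the save and read side boils down to something like this:-

// save a JSON string under a unique-ish id built from the date/time plus a random string
function save_json($dir, $json_str) {
    $id = date('Y-m-d-H-i-s') . '-' . substr(md5(uniqid('', true)), 0, 6);
    file_put_contents($dir . '/' . $id . '.json', $json_str);
    return $id;
}

// read it back in - file_get_contents() avoids the truncation I was seeing with fread()
function load_json($dir, $id) {
    return file_get_contents($dir . '/' . $id . '.json');
}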

Once you have retrieved your JSON string you can decode it again:-


$case_study = json_decode($json_str);


and we end up with this:-


stdClass Object
(
    [id] => my/case_study
    [title] => My case study
    [body_text] => Some text for the case study
    [skills] => stdClass Object
        (
            [css] => CSS
            [sitebuild] => Site Build
        )

)


Notice that the “skills” array is now an object. We can turn it back into an array using get_object_vars():-


$case_study->skills = get_object_vars($case_study->skills);


NB: this only happens for key => value (associative) arrays; if it were just a simple indexed array, e.g. array('css', 'sitebuild'), we wouldn’t need to pass it through get_object_vars(), as it would be decoded as an array.
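
As an aside, if you’d rather avoid the object/array mismatch altogether, json_decode() takes a second argument which makes it return associative arrays all the way down rather than stdClass objects:-

// passing true as the second argument decodes to nested associative arrays
$as_array = json_decode($json_str, true);
echo $as_array['skills']['css']; // CSS

For the rest of this post I’ll stick with the object form.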

The get_object_vars() conversion gets us back to where we started:-


stdClass Object
(
    [id] => my/case_study
    [title] => My case study
    [body_text] => Some text for the case study
    [skills] => Array
        (
            [css] => CSS
            [sitebuild] => Site Build
        )

)


Sort of – we now have an object with all the attributes of the original, but it doesn’t know it is a case_study object. In fact it isn’t a case_study instance at all – it’s a stdClass. We would have to create a new instance of case_study and copy the attributes across if we needed the real thing, but if you just want the data, it can be used as-is in most cases.
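
If you do need the real thing, one quick approach (just a sketch – hydrate_case_study() isn’t part of eatStatic) is to copy the decoded attributes onto a fresh instance:-

// copy decoded attributes onto a real case_study instance
function hydrate_case_study($decoded) {
    $case_study = new case_study;
    foreach (get_object_vars($decoded) as $key => $value) {
        // nested objects (like skills) go back to arrays, as above
        $case_study->$key = is_object($value) ? get_object_vars($value) : $value;
    }
    return $case_study;
}

$case_study = hydrate_case_study(json_decode($json_str));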

The above example is very simple, but it can get quite complex when your object contains arrays of objects, which in turn may contain arrays (and arrays of objects). The initial cheap and convenient trick of encoding an object instance and saving it, then retrieving, decoding and using it, can then get quite hairy – but it is still less effort than splitting it out into separate objects and maintaining them across several different relational database tables.

In part two I’ll talk about how to use MongoDB to save and retrieve object instances in PHP.

Site building workflow challenges – keeping HTML in a database

I was reminded today of one of my pet hates: coordinating a site build or rebuild when the CMS you are using keeps content – often containing HTML markup from an editor such as TinyMCE – in the database.

Consider the following scenario:-

  • You have a staging site where the client has been using the CMS to input content
  • Meanwhile, you make some changes to the database on your local version and want to push them to staging
  • You can use a migration script to push your changes to the staging database, but you find yourself also wanting to copy the new content back into your local database, so you can work on CSS against real content. You would then probably drop your local database and restore from a staging backup, losing any test content you had put in locally

It’s basically a bit of a kerfuffle.

This is one of the scenarios that I hope could be avoided with a CMS based on eatStatic (if I ever develop it beyond a blog engine) – any content types that contain bodies of text, whether they are marked up with HTML or not, would be stored on the filesystem. This could be put under version control, so you could selectively synchronise your content with another instance of the site.

I can also see a case for an add-on to any existing CMS – an export function that routinely pushes text content from the database into text files to be kept under version control, and also allows import, so that instances can selectively sync content.
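
As a very rough sketch of the sort of thing I mean (the table and column names here are entirely hypothetical – they would depend on the CMS in question):-

// dump HTML body content from a hypothetical cms_content table into files for version control
$pdo = new PDO('mysql:host=localhost;dbname=mysite', 'user', 'password');
foreach ($pdo->query('SELECT slug, body FROM cms_content') as $row) {
    // one file per content item, named after its slug
    file_put_contents('content_export/' . $row['slug'] . '.html', $row['body']);
}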

Introducing eatStatic blog engine

creating a new blog post in textmate

Recently I ported this blog from an ancient version of WordPress to my own simple blog engine, which uses my PHP5 micro-framework, “eatStatic”. I use the phrase “blog engine” rather than “blog software”, as it isn’t really packaged up yet as something I would describe as software – it’s more a collection of classes and templates that can be used to keep a blog.

The bulk of the code was written last year in the space of a couple of hours while sitting in a garage waiting for my car to be fixed – I was about to go on a long road trip and wanted a blogging solution that let me create blog posts and organise photos offline and then conveniently sync it to the live site when I had an internet connection. The result was my “on the road” blog about mobile working.

The thing that sets this apart from other blog engines (and the origin of the name “eatStatic”, along with a nod to a 90s techno act) is that instead of using a relational database to store content, it uses simple text files for blog posts, and cached JSON files to store collections of data (e.g. post lists, tag references etc.). I have it set up to run with Dropbox, so that I compose my posts in TextMate and they are synced to a Dropbox folder on the webserver. You don’t have to use Dropbox though – you can use any technique you like to upload the data files to the server. For “on the road” I use Subversion, which means I also have versioning of blog content. Draft posts are composed in a drafts folder and moved into the main posts folder to push them live. There is currently no admin area on the site, though I might add one later.

The published date and URI for each post are taken from the text file name – I’ve adapted it for this blog to use the same URL scheme as WordPress, to avoid link rot on legacy content. Some people asked me why I don’t just use the title and created/modified date of the text file to make it even simpler; the answer is that I wanted finer control and the option to specify the publish date – using created/modified would have been a disaster for the content I imported from WordPress. Also, by naming each file starting with YYYY-MM-DD, the post files are easier to sort and find in the posts folder, both visually/manually and in code. You can use HTML in the blog post, and additionally line breaks are converted to br tags, other than immediately after a closing tag. You can add tags and metadata at the end of the text file.
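
As a rough sketch of the idea (not the actual eatStatic code – the function name is made up), the date and URL slug can be pulled straight out of a filename like 2011-09-17-my-post-title.txt:-

// derive the publish date and url slug from a file named YYYY-MM-DD-some-slug.txt
function parse_post_filename($filename) {
    $name = basename($filename, '.txt');
    $date = substr($name, 0, 10);  // e.g. "2011-09-17"
    $slug = substr($name, 11);     // e.g. "my-post-title"
    return array('date' => $date, 'slug' => $slug);
}

print_r(parse_post_filename('2011-09-17-my-post-title.txt'));
// Array ( [date] => 2011-09-17 [slug] => my-post-title )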

I’ve also got a simple thumbnail gallery which can be included in a post (see below) by uploading a folder full of full-size images with the same name as the post. The idea behind this is that a set of JPEG/PNG images can be imported from a camera and automatically pushed to the server by Dropbox. A caching script creates the thumbnails and web-size versions on demand, which are saved to the filesystem for efficiency on subsequent requests. I considered setting it up so that each post had its own folder, which could then contain images, but the blog engine was mostly written with the idea of quickly creating posts by opening TextMate/Emacs, writing and saving, rather than faffing around with creating folders.
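
The caching approach is roughly this (a simplified GD-based sketch, not the code this blog actually runs – it assumes JPEG source images):-

// return the path to a cached thumbnail, generating it with GD on the first request
function cached_thumb($src, $cache_dir, $width = 150) {
    $thumb_path = $cache_dir . '/' . $width . '_' . basename($src);
    if (!file_exists($thumb_path)) {
        list($w, $h) = getimagesize($src);
        $height = (int) round($h * ($width / $w)); // keep the aspect ratio
        $source = imagecreatefromjpeg($src);
        $thumb = imagecreatetruecolor($width, $height);
        imagecopyresampled($thumb, $source, 0, 0, 0, 0, $width, $height, $w, $h);
        imagejpeg($thumb, $thumb_path, 85); // save so subsequent requests skip the resize
        imagedestroy($source);
        imagedestroy($thumb);
    }
    return $thumb_path;
}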

I made the decision not to build in any commenting functionality – the anti-spam/moderation features needed are too much of a pain to deal with, so I’ve archived the old WordPress comments into the post bodies and integrated Disqus instead.

As I mentioned before, I’ve been using a previous version of eatStatic successfully for my “on the road” blog, but I wanted to see how it coped with hundreds of posts rather than just a handful – it seems to be doing fine, coping with over 600 posts, though I’m sure there is room for improvement. I’ve also been investigating making the JSON read/write switchable to use MongoDB, so that it could potentially be very scalable – I’ve encountered a few inconsistencies in the way that PHP’s json_decode() and MongoDB object retrieval work, but nothing that can’t be worked around – expect a blog post on that later!

I don’t expect eatStatic blog to be a WordPress killer, but it may appeal to techie types who want a lightweight PHP5 blog engine, maybe to plug into an existing site, and to people who want to compose posts in TextMate/Emacs (or any other code editor) rather than in a web form. If you are interested in trying it, keep an eye on the GitHub repo, as I’ll commit an example of how this blog is formed once I’ve ironed out the more embarrassing bugs! I may add a simple admin area at a later date, to allow publishing entirely via the web, and I think it would also benefit from a “post by email” feature for convenient moblogging, but don’t hold your breath!

When I was importing content (I actually wrote a Python script to parse a WordPress XML export file and create the text files), I found it quite fitting that the first ever post on this blog, nearly ten years ago, was made on a home-brewed ASP blog engine which used XML for data storage. I think before that I kept a static HTML blog of sorts on a Freeserve site, but unfortunately I haven’t got a copy of that for completeness.

Lastly, whether or not you want to set up an eatStatic-based blog, if you aren’t already using Dropbox, it really is excellent, so why not sign up for a free 2GB account using my referral URL, so I can get some more free space? Even though I have a paid Dropbox account, I use a second free account to mount on my server for automated site/database backups and for this blog, and it keeps filling up!

Watershed 2011 rebuild

screen grab of watershed.co.uk

Last night the new version of the Watershed website was pushed live. I had the pleasure of being one of many people involved in this project, which involved combining several different sites representing different projects within the Watershed brand. I did the “first cut” of the HTML/CSS, working from a PSD provided by the design agency Document, and also helped with some of the Drupal theme integration, working alongside some talented Watershed staff and other freelancers (I’d name them all here, but would inevitably miss someone).