Create an iPhone icon


Ok, it's not as cool as this, but still. (image c/o JD Hancock on Flickr)

You know when you add a website to the home screen on the iPhone you get one of those nifty little icons?  Ever wondered how to create one?  Turns out it is dead simple.  Hey, if I can do it anyone can!  It only takes a few steps and, having created one myself, they do look rather nifty.  So, here’s what you do:

1) Create a 45×45 pixel PNG (my version of Photoshop doesn’t allow the creation of a PNG file so I saved it as a BMP, opened it in Paint and then saved it as a PNG file).

2) Save the file using the filename apple-touch-icon.png.

3) Drop the file in the root directory of your website.
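If your image editor makes step 1 a pain (mine did!), the file can even be generated from scratch. Here’s a minimal Python sketch, standard library only, that writes a solid-colour 45×45 placeholder with the right filename – swap in a proper image tool for anything fancier:

```python
import struct
import zlib

def chunk(tag: bytes, data: bytes) -> bytes:
    # A PNG chunk: 4-byte length, tag, data, then CRC over tag + data.
    return (struct.pack(">I", len(data)) + tag + data
            + struct.pack(">I", zlib.crc32(tag + data)))

def solid_png(path: str, size: int = 45, rgb=(30, 120, 200)) -> None:
    # IHDR: width, height, bit depth 8, colour type 2 (truecolor RGB).
    ihdr = struct.pack(">IIBBBBB", size, size, 8, 2, 0, 0, 0)
    # Each scanline: filter byte 0 followed by the RGB pixels.
    row = b"\x00" + bytes(rgb) * size
    idat = zlib.compress(row * size)
    png = (b"\x89PNG\r\n\x1a\n"
           + chunk(b"IHDR", ihdr)
           + chunk(b"IDAT", idat)
           + chunk(b"IEND", b""))
    with open(path, "wb") as f:
        f.write(png)

solid_png("apple-touch-icon.png")
```

Run it in your site’s root directory and steps 1–3 are done in one go (the colour is just a placeholder – you’d obviously want your own artwork).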

That’s it done.  Now if you go to your site and add it to your home screen, your icon should magically appear on your iPhone. Easy!

Oh, I’m not sharing mine yet as it is all part of my secret project 🙂


Photosynth for iPhone

The past couple of days I have been playing around with a new photographic application for the iPhone.  Called Photosynth, the application allows you to create ‘wraparound panoramas’ without the need for a tripod.  Developed by Microsoft (I kid you not), the application is completely free for the iPhone and I have to say I have been very impressed with the results so far (Microsoft does good shock).

To make the most of the application (i.e. share with friends, etc.) you will need to sign up for a Windows Live account (in my case, yet another account to sign up for).  Once you have signed up you can upload images to the Photosynth website and share them via Facebook (no option to share via Twitter or Flickr as yet – I would hope that the latter will enable such panoramas to be uploaded eventually via Photosynth, but given the development of Flickr it is unlikely).  So, how does it work?

Basically you simply manoeuvre your iPhone around the scenery and the application automatically takes the images.  Sometimes you need to tap the screen to capture the image, but the majority of the time it will do so automatically.  All you need to do is make sure that you capture the full scene and there are no ‘black spots’ (areas that you haven’t managed to capture will show up as black spaces on the finished panorama).

Viewfinder on Photosynth

So, fairly straightforward and easy to use then.  It certainly beats trying to achieve similar results with a standard camera and a tripod.  But what about the quality of the images?

To be honest, I wasn’t expecting much.  As the application ‘stitches’ together the panorama I expected the finished results to be of fairly limited quality.  I was wrong, they are really rather good.  I’ve only taken a couple of panoramic shots so far, but they have certainly surpassed my expectations:

Barajas airport

Metropol Parasol

You’ll need to click on the images themselves to get the idea, but as you can see, they are fairly impressive and only took a minute or two to create.  It is certainly a useful application when attempting to capture those large open spaces that normal photos just cannot quite capture.  Sometimes a simple image cannot really give you a great sense of scale or enable the viewer to imagine what it is like to inhabit a particular space.  Personally I think panoramas are a great way of giving the viewer a sense of what it is like to stand within a particular space.  I think certainly in the case of the Metropol, a single photo of the structure would not really do it justice.

For a free piece of photographic software, Photosynth certainly comes up trumps and I’d heartily recommend it to anyone interested in photography.  It’ll certainly get plenty of usage out of me.  Now, if someone could work on sharing via Flickr I would be a very happy man.

Net neutrality and public libraries

Information is Free. But for how long?

Towards the end of last year, Ed Vaizey addressed a telecommunications conference in London organised by the Financial Times.   In his address, he pointedly failed to give his support for ‘net neutrality’.  In fact, although he has denied it, it would appear that he supports scrapping it altogether.  In a section of the speech on ‘net neutrality’, Vaizey commented:

“Consumers should always have the ability to access any legal content or service. Content and service providers should have the ability to innovate and, most importantly, to reach end users … This could include the evolution of a two-sided market where consumers and content providers could choose to pay for differing levels of quality of service.”

The Guardian goes on to state:

The comments sparked a furore as his words were seen as allowing a two-tier internet in which companies would have to pay to get their content to arrive in timely fashion – a complaint that Erik Huggers of the BBC made last month over the corporation’s iPlayer catchup service.

There’s a phrase that should strike fear in any information professional: “two-tier internet”.  ‘Two-tier’ inevitably means unequal and, consequently, entrenching a divide between those that can access the top tier and those that can’t.  But before going any further, what is ‘net neutrality’?

Tim Berners-Lee describes ‘net neutrality’ as follows:

Net neutrality is this:

If I pay to connect to the Net with a certain quality of service, and you pay to connect with that or greater quality of service, then we can communicate at that level. That’s all. It’s up to the ISPs to make sure they interoperate so that that happens.

Net Neutrality is NOT asking for the internet for free.

Net Neutrality is NOT saying that one shouldn’t pay more money for high quality of service. We always have, and we always will.

Control of information is hugely powerful. In the US, the threat is that companies control what I can access for commercial reasons. (In China, control is by the government for political reasons.) There is a very strong short-term incentive for a company to grab control of TV distribution over the Internet even though it is against the long-term interests of the industry.

Let’s see whether the United States is capable of acting according to its important values, or whether it is, as so many people are saying, run by the misguided short-term interests of large corporations.

As Berners-Lee suggests, abandoning ‘net neutrality’ could lead to very real dangers in terms of the control of information.  At present the flow of information is neither controlled by the state (as it is in China) nor by corporate interests.  The removal of ‘net neutrality’ would change this, leading to corporations controlling access to information – a worrying prospect.

Over in the US, the debate over net neutrality has been raging for some time. Democratic Senator Al Franken has been particularly vocal in defending the principles of neutrality.  As one US blogger puts it:

Net neutrality is, of course, the exact opposite of the freedom-trampling “government takeover” as it is tarred by opponents in the capital. Net neutrality is internet freedom, not its adversary. The doctrine is designed to protect consumers’ rights to access information that is unfiltered and unrestricted by telecommunications companies that stand to profit from what could constitute, come to think of it, a “corporate takeover of the internet”.

“The only freedom they are providing for,” Democratic Senator Al Franken and several colleagues snapped back at Republicans in a recent letter, “is the freedom of telephone and cable companies to determine the future of the internet, where you can go on it, what you can attach to it, and which services will win or lose on it.”

The removal of ‘net neutrality’ could do very real damage to the Internet as we know it today and seriously impact the consumer’s ability to access information.  If ISPs are able to discriminate in the flow of content there could be very serious consequences and it would undoubtedly be, as the ALA recently put it, ‘a severe violation of intellectual freedom’. Take these examples from The Nation:

Imagine how the next presidential election would unfold if major political advertisers could make strategic payments to Comcast so that ads from Democratic and Republican candidates were more visible and user-friendly than ads of third-party candidates with less funds. Consider what would happen if an online advertisement promoting nuclear power prominently popped up on a cable broadband page, while a competing message from an environmental group was relegated to the margins. It is possible that all forms of civic and noncommercial online programming would be pushed to the end of a commercial digital queue.

This is an even greater consideration in the UK where there are three main political parties and a number of smaller parties that are growing in popularity.  How would the Greens and UKIP, for example, be able to compete if ISPs discriminate against them and in favour of the main political parties?  And if they are able to discriminate, how will we be able to ensure that the consumer receives a range of information rather than just that which is ‘approved’ by the ISP?

As I mentioned above, the effect of a ‘two-tier’ Internet should have very real concerns for all information professionals.  The ALA made their concerns clear in 2006:

First, Network Neutrality is an intellectual freedom issue. The ALA defines intellectual freedom as the right of all people to seek and receive information from all points of view, without restriction. Unfortunately, there is no law that protects intellectual freedom on the Internet today. Internet service providers (such as the cable and telephone companies) have the ability to block or degrade information or services travelling over their networks. If these companies discriminate against certain kinds of information based on the content of the message being delivered, this would represent a severe violation of intellectual freedom.

Second, Network Neutrality is a competition issue. Libraries in the digital age are providers of online information of all kinds. Among hundreds of examples, public libraries are developing online local history resources, and academic libraries allow the online public to explore some of their rarest treasures. Libraries, as trusted providers of free public access to information, should not compete for priority with for-profit history or literature Web sites that might be able to afford to strike deals with service providers. This makes the Network Neutrality debate not only a matter of philosophy and values for librarians, but also of livelihood.

Couple this with some local authorities’ eagerness to close public libraries, and it is clear there are problems ahead.  One of the arguments against the need for a network of public libraries is that we ‘all’ have access to the Internet (of course we don’t but that doesn’t fit the narrative).  This is all well and good at present, but with ‘net neutrality’ under attack and an increasing amount of content being locked behind paywalls, it won’t be long before we find that the Internet as we know it is but a distant memory.

This is, again, yet another reason why libraries and information professionals are so important.  Librarians do not (or at least should not) discriminate in the information they provide their users.  If, for example, a customer visited the library and requested a book on ‘Islamic terrorism’, a librarian would (provided both texts are available, of course!) point them to both ‘Al Qaeda‘ by Jason Burke and ‘Londonistan‘ by Melanie Phillips and allow them to decide which one is appropriate (the former, hopefully!).  It may seem insignificant, but if the information professional were to behave as an ISP ‘unburdened’ by ‘net neutrality’, the customer would be presented with one or the other, potentially without even being aware that the other was available.  Imagine an information space where access to information was subject to vested interests.  Librarians do not have vested interests; they simply point you to a range of information resources and allow you to decide which is suitable.

Imagine, for a moment, that there are no public libraries and net neutrality is a thing of the past.  Imagine what the implications are for access to information.  Imagine the impact that this would have on our democracy.  Imagine the impact that this would have on society and how it would reinforce the gap between the richest and the poorest.  Sure, you may not think libraries are that important when you have the whole of the world-wide web at your fingertips.  But once paywalls are commonplace and ISPs are able to discriminate between content, you may just realise what you’ve lost.  And don’t be fooled into thinking this is a far-fetched fantasy.  We are only a short step away from this eventuality.  Information has been commodified; once there is money to be made it won’t remain free and open for long.

RFID self-service – is now the right time?

The following was written with help from Mick Fortune, an expert in RFID technology.

What is RFID?

RFID (Radio-frequency identification) is a type of technology that is often used in self-service equipment to enable library users to borrow and return books themselves.  Although RFID technology is used in self-service, not all self-service equipment uses RFID.

Sounds interesting. Are all RFID systems the same?

Most systems perform much the same tasks but each uses a different RFID “data model”.  That means that books from one library service cannot easily be used by another, and it prevents libraries from using whatever equipment they want. A new UK standard to overcome this limitation was agreed in 2010, but so far no public library service is using it.  Unlike barcodes, which can be read by almost any scanner regardless of the schema used, RFID “tags” contain different data stored in different ways.
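The incompatibility is easier to see with a toy example. Both layouts below are invented purely for illustration (real supplier formats are proprietary and differ in their own ways), but they show why a reader needs supplier-specific logic just to pull the same item ID off two different tags:

```python
# Two hypothetical tag layouts holding the same item ID, to illustrate
# why a shared data model matters. Neither layout is a real supplier format.

def parse_supplier_a(tag: bytes) -> str:
    # Supplier A (invented): item ID first, as 14 ASCII bytes, zero-padded tail.
    return tag[:14].decode("ascii")

def parse_supplier_b(tag: bytes) -> str:
    # Supplier B (invented): 2-byte header, then a length-prefixed item ID.
    length = tag[2]
    return tag[3:3 + length].decode("ascii")

# The same physical book, tagged by each supplier:
tag_a = b"30114001234567" + b"\x00" * 18
tag_b = b"\x01\x04" + bytes([14]) + b"30114001234567" + b"\x00" * 15

# Same item, but each tag is gibberish to the other supplier's parser.
assert parse_supplier_a(tag_a) == parse_supplier_b(tag_b) == "30114001234567"
```

A common standard effectively fixes one layout, so any compliant kit can read any compliant tag.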

Are there advantages to using barcodes?

Barcodes are a much cheaper and widely recognised means of identifying individual items, but items have to be borrowed one at a time – with each book usually having to be opened to scan the barcode. Stocktaking also requires staff to remove items from the shelf and scan each barcode separately.

So how does this differ from RFID?

With RFID borrowers can place all of their books on a reading table and borrow them all simultaneously – as many as 15 in some systems. Stocktaking no longer requires items to be handled at all since the tags can be read at a distance and through the covers.

The problem with RFID is that, with there being no data standard, different suppliers have chosen different ways to store data on their tags – data like copy information, owning library, whether the item is part of a set, whether it can be borrowed by anyone or is limited to certain age groups, and so on.

Book suppliers dislike this lack of standards as they have to carry out a different process for every library, writing different data in different places so that each equipment supplier’s hardware can read it.

My authority is introducing self-service to save money.  Will it?

It might. Savings will be made if the machines are used to replace staff (which is what is happening in many cases), but self-service is not cost effective on its own.  RFID tags cost more than standard barcodes (about ten times as much), and they also make the cost of supply higher for the reasons given above.  Factor these two costs into the book purchasing for an entire authority (Kent, for example, added 230,000 items last year) and the costs increase considerably.
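A back-of-the-envelope sketch of what that tag premium means at Kent’s scale. The per-unit prices here are illustrative assumptions, not quoted figures – only the “ten times as much” ratio and the 230,000 items come from the answer above:

```python
# Rough cost of tagging a year's new stock. Unit prices are assumptions.
BARCODE_COST = 0.03              # assumed cost (£) per barcode label
RFID_COST = BARCODE_COST * 10    # "about ten times as much"
NEW_ITEMS = 230_000              # items Kent added last year

barcode_total = NEW_ITEMS * BARCODE_COST
rfid_total = NEW_ITEMS * RFID_COST
extra = rfid_total - barcode_total

print(f"Barcodes: £{barcode_total:,.0f}")
print(f"RFID tags: £{rfid_total:,.0f}")
print(f"Extra per year: £{extra:,.0f}")
```

Even at these modest assumed prices the tagging bill alone runs to tens of thousands of pounds a year before the supply-chain overheads are counted – which is why the standard matters.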

Why is the new standard an improvement?

Firstly, it makes it easier for suppliers to process book stock.  Instead of the manpower and time lost through alternating between different types of RFID tag, book suppliers can just apply one type of tag, which would effectively drive down costs to library authorities.

Secondly, as well as driving costs down from the book suppliers’ end, it also drives down the cost of the tags.  If all suppliers offer the same type of tag, it would drive down costs, making the technology cheaper for library authorities.

Another advantage is for the future of the library service.  Everyone accepts that a desired outcome for the service is the ability for items to be moved around the country quickly and easily.  By ensuring a standard is applied to tags it makes it much easier for library authorities across the UK (and not just in small consortiums) to share their book stock.

So is RFID a bad thing?

RFID is most certainly a good thing, but investing right now in an RFID system that does not use the new standard could be a costly mistake. The new standard will reduce costs, but much of the existing equipment will have to be updated to handle it.

Many thanks to Mick for helping with these questions.  My understanding of RFID is fairly limited so Mick’s input was very gratefully received.

What this means for Kent Libraries*

Mike Hill, the councillor responsible for libraries, recently stated the following:

“Self issue technology will help us to deliver a more efficient and cost effective library service.

“Over the next 18 months we will cover the £1.5m cost of the project and from that point on save an additional £1m per year.

“As part of these savings we will be taking 83 full time equivalent posts out of our current structure.”

The problem is that the savings of £1 million appear to be the result of staff cuts, not of any supposed efficiencies of self-service.  If KCC did not take 83 full time equivalents out of the structure (equivalent to approx. £1 million off the wage bill), there would be no saving from the introduction of RFID at all; on the contrary, it would cost substantially more (even if you took out the cost of the equipment).  As was stated above in the Q&A, the savings councils often announce come from replacing staff, not from the introduction of the actual self-service units.  So, to say that self-issue will be more “cost effective” is slightly misleading.  The “cost-effectiveness” comes from the removal of staff from the structure, not from the equipment.

Effectively then, the self-service units are a convenient excuse to cut staffing.  It seems the council have made a hurried decision to bank some headline savings rather than waiting for the improvements in RFID outlined above.  The shame is, had they waited a little longer before introducing the technology, they would not have needed to remove quite so many posts from the existing structure in order to make the equivalent savings.  Patience would have led to savings in terms of money as well as jobs.

* This section was added after the original post was published.

CardStar – Embrace or Fear?

There was a lot of chatter on Twitter last week with the discovery that an application for the iPhone is offering a new way for borrowers to use their local library service.  CardStar offers users a way of carrying all their barcoded loyalty and reward cards with them without having a pocket full of plastic.  By inputting the barcode details, the application generates the appropriate barcode which can then be scanned in store… straight off your iPhone.  However, it is not only store cards that are catered for by this service; it is also possible to input a library card number and then, theoretically, present your iPhone at the library desk to take out books.

The application already lists Surrey libraries as one of the ‘merchants’, apparently in reaction to a borrower request.  Interestingly, Surrey libraries were unaware that they are listed on the application; CardStar does not inform the relevant organisation that a request has been made.  This is not particularly helpful as library authorities could be listed without their consent or knowledge.  Furthermore, according to the blogger who kickstarted the flurry of Tweets, not many other libraries are aware of the service.  Of course, this presents its own problems for libraries unaware that users have requested that their library card be included on the application.  Should someone visit their local library and present their iPhone to a member of staff who is unaware of the application, there is likely to be an uncomfortable confrontation regarding the validity of the barcode.  In fact, it would appear that there have been some problems already.

@aarontay at Musings about Librarianship has already tracked down a couple of embarrassing incidents involving the application in some libraries in the US:

“Look you, next time you want to take out books bring in the actual card.  I don’t know if this is a real card.  Do you understand me?  I want the card, not the barcode.  Jesus.  begin muttering under breath and shaking head [then back to] I don’t know if this is a real card.”


Do you have your library card?

Oh, yea. Sure. Here it is.

She looked at my outstretched hand with the iPod Touch and appeared unsure of what to do with the scanner in her hand. Taking a deep breath and saying a small prayer, I casually took the scanner from her hand and revealed my agenda to her.

See? I just place this scanner above the barcode displayed on the screen and….

Ummmm you can’t do that here…

No, it works! Trust me! I got it. Let me try one more time….

Excuse me, young man. People are waiting in line.

That’s not the kind of customer service that will win awards, that’s for certain.

The problem is, you can kinda understand the reactions of the staff members in these libraries.  After all, if you were presented with some new tech like this that you were previously unaware of, you would quite possibly refuse to even entertain the idea that these are valid library cards.  Besides, even if you were aware of the tech, there would still be reservations regarding security.  How can anyone know if the barcode number presented before them is genuinely the card number for the customer they are serving?  After all, it is just a case of jotting a card number down on the iPhone.  It’s no more valid than scrawling a barcode on a piece of paper and handing it over to a member of staff.  Clearly there are security concerns that have to be resolved and policies to be developed in relation to this application.  That’s not to say it is a thing that libraries (or frontline library staff) should fear.  Anything that makes the customer’s experience easier should be considered an advantage to the service.
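The security worry is easy to demonstrate: apps like CardStar only re-render a typed-in number as a barcode, so anyone who knows a card number can produce a scannable ‘card’. Here’s a rough Python sketch using the standard Code 39 digit table (a symbology commonly used on library cards – that your library uses it is an assumption on my part):

```python
# Render a card number as Code 39 element widths ('1' = wide, '0' = narrow,
# elements alternating bar/space). The point: the "barcode" is derived from
# the number alone, so it carries no more proof of identity than the number
# scrawled on a scrap of paper.
CODE39 = {
    "0": "000110100", "1": "100100001", "2": "001100001", "3": "101100000",
    "4": "000110001", "5": "100110000", "6": "001110000", "7": "000100101",
    "8": "100100100", "9": "001100100", "*": "010010100",
}

def code39_widths(card_number: str) -> str:
    # Code 39 frames the data with '*' start/stop characters; each character
    # is 9 elements, exactly 3 of them wide.
    chars = "*" + card_number + "*"
    return " ".join(CODE39[c] for c in chars)

print(code39_widths("30114001234567"))  # hypothetical card number
```

Anything that can draw those widths on screen (or on paper) yields a scannable barcode, which is why verification really has to happen against the borrower record, not the barcode itself.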

Having said that, there is no guarantee that the application will work in all libraries anyway.  Judging by the tweets flying around on Friday last week, it was a bit hit and miss with some scanners.  It certainly seemed that those who tested the application on old scanners had more luck than those with new ones.  I tried to find out the reason for this from CardStar on Twitter, but it was more complicated than a 140 character tweet (obviously, should have worked that out myself!).  I have consequently emailed their support desk to ask for further info, so should find out why this is the case in due course¹.

Personally, I think @aarontay is spot-on with his conclusion.  It is important for libraries to be prepared for the use of this technology as any iPhone owner could stroll in with their iPhone and expect to take out their books using the CardStar application.  The most important thing is to ensure that the examples above are not repeated – that would be a disaster.


1. I received a reply from CardStar explaining the situation with the hit and miss nature of scanning the iPhone.  They said:

The first thing to note is that handheld scanners (where you can direct the laser towards the phone) tend to work much better.  Because a lot of the laser light is lost when scanning from an LCD screen, the best laser scanners are the more high-powered ones (which typically correlate with “more expensive”).  We have found the most success with handheld scanners from Symbol.

They also requested that I send them the make and model of any scanners that are incompatible so they can test them in their lab.  Finally, they added:

We are actively trying to improve scanning rates in CardStar, and as we make advancements we will push them into newer versions of the software.

Looks like CardStar are aiming to be around for a while and to develop their product. Could be interesting times ahead.

The Guardian App for the iPhone

So I guess most people will be aware by now that there is now a Guardian application for the iPhone (you can hardly miss the ads in today’s paper).  I spent some time mulling over whether to add the application as, unlike a number of other newspaper applications, the Guardian app is not free.  However, I was eventually convinced that it was worth shelling out £2.39 (I know, it sounds pathetic doesn’t it? Not exactly a king’s ransom, but when you are used to free apps……).  So, what’s it like?

Well, I have to say it is pretty impressive.  There are plenty of neat things on there to make it well worth forking out for.  Needless to say, shortly after downloading it I was wondering why I spent so long thinking about it!  The home page is fully customisable, allowing you to select which sections of the paper you want to see when you first connect, up to a maximum of six sections.  As well as selecting which sections you want on your home page, you can select how many stories you want to appear under that section, from a minimum of one to a maximum of six.   Personally I think three or four is enough as you don’t want to be bombarded with a whole load of stories on the front screen.

The Guardian App Homepage

As you can see from the toolbar on the image above, the application also allows you to listen to a number of podcasts.  This is particularly handy in my eyes as it is much easier to just click on the audio and listen to it than to go through the relative hassle of downloading from iTunes or from the website.  Just click on the audio link of your choice and, within seconds, it is streaming through your iPhone – utter simplicity.

Audio Menu

There are also a whole host of other options that make this well worth downloading (even at a whopping £2.39), including:

  • download stories and read offline
  • add stories to a favourites folder
  • send to Facebook
  • most viewed stories
  • increase text size, and
  • share via email.

The one glaring omission for me is that you cannot as yet share via Twitter, which is a bit odd considering how it has grown as a medium for information sharing in recent years.  Still, it is early days yet and it may come in a future update*.

One thing that makes the app all the more interesting is that it comes on the back of suggestions by Murdoch that his news media will soon be charging for access.  I still have no idea how this would work (how will they stop a subscriber from copy/pasting whole articles and posting them on a blog?), but it seems to be something that is still being seriously considered at News International (which should worry all those concerned about access to information).  One wonders how this application would impact on that.  Certainly a recent report claimed that iPhone users are more likely to pay for content than the casual internet user.  Although that report was published by… The Guardian (perhaps the decisions had already been made on the application when that report went to press).  Whatever the ramifications of this move, it is sure to become a well-used app on my iPhone.

* Apparently a ‘Twitter This’ function is in the pipeline (thanks @ostephens).

Riding High Amongst the Waves*

Google Wave seems to be just about the hottest thing on the internet at the moment.  People have been eagerly awaiting that magic invite dropping into their inbox just so they can get onboard the Next Big Thing.  Fortunately for me, I was lucky enough to receive an invite care of a fellow Tweeter (Twitter certainly has its advantages!).  So what is Google Wave actually like?

Google Wave - The Next Big Thing?


Well, to be honest, I haven’t spent a great deal of time on it so far so I’m not really in a position to give a full and fair appraisal.  That said, I’m still going to share some initial thoughts on it.  The first thing I feel I should point out is that it is quite bewildering when you first start playing with it.  When presented with the homepage (see image above), it took a little while to work out how it was supposed to work.  That is maybe why Google recommends you watch a ridiculously long video before you even contemplate diving in (do you see what I did there?!).  However, I did find this handy little video that talks you through some of the main features of Google Wave:

Although it is a little confusing to start off with, there is potential there for it to be a very useful collaborative tool.  By inviting others to join you on a ‘wave’ you can work together on a shared piece of work or just communicate in real-time (a bit like MSN Messenger but you can actually see what they are typing as they type it).  There are a number of gadgets that can be incorporated into ‘waves’ including Sudoku puzzles and chess.  It is also possible to embed Google Maps which enables people on the same ‘wave’ to collaborate on a map (which is quite useful and very easily done).  I think Mashable’s description of Google Wave sums it up quite nicely:

It combines aspects of email, instant messaging, wikis, web chat, social networking, and project management to build one elegant, in-browser communication client.

So pretty much all the best elements of Web 2.0 rolled into one.

It will be interesting to see how Google Wave develops over the coming weeks and months as more people get onboard.  It certainly has great potential to be a very useful tool, as long as people are prepared to overcome the initial hurdles.  I’ll certainly continue to play around with it and share more thoughts on it as time goes by.  Hopefully I’ll be able to share something a little more comprehensive than this effort!

There is also a Complete Guide to Google Wave available that may also help with getting to grips with it.

* I have been wondering how I would incorporate my love of Pearl Jam into one of my blog posts….looks like I managed it!