Jan 31, 2012

Could facial recognition technology destroy 'redundant and bankrupt' passwords?

Remembering complex passwords could be a thing of the past if facial recognition technology takes off - but is it as secure as a password and does it work?

When Google showed off "Ice Cream Sandwich", the latest version of its smartphone operating system, in October last year, many were excited to see a feature that let users unlock a phone with nothing more than their face. But it didn't take long for people to point out how insecure the feature was, demonstrating that holding a photo of an authenticated user up to the phone's camera would let anyone with access to such a photo unlock it with ease.

It raises the question: if one of the world's largest technology companies can have its implementation of facial recognition technology tricked so easily, will the idea ever take off as a secure authentication mechanism?

Some young entrepreneurs from Dublin in Ireland seem to think so. Although they are yet to publish any evidence supporting their claim, they say they plan to release, around August this year, technology that any website can use to let its users log in by presenting their face to their computer's web camera.

Niall Paterson, co-founder of the Irish startup Viv.ie, is attempting to create the technology with several friends and hopes it will "destroy" passwords.

"I was on Facebook logging in with my password which is 21 characters long and I got it wrong and I thought that there had to be a better way," said Niall, 17, in an interview yesterday. "Instead of using passwords the aim . . . will be that passwords will be eliminated and you will be able to log in just using your face.

"A web camera would take a picture of your face, analyse it, and if you've been registered already, log you in."

Security experts, however, remain sceptical over whether it would work in practice and if it would be secure enough.

Paul Ducklin, of security firm Sophos, said the primary problem with Viv.ie was that the only evidence of how reliable the system was rested on the "unsupported claim of one of the inventors that it is 'impossible to crack'". The second problem was "whether you'd want your Facebook identity tied to your face".

Questioned on security, Niall claimed Viv.ie used the image editing software ImageMagick that could detect whether an image was 2D or 3D. "We feel that a 2D picture of a face will sort of be exposed [by the software]." A number of other security measures were also being implemented, he said, such as detecting whether the images being fed into a computer were that of a web camera or software acting as a fake video stream.


Jan 30, 2012

Help stamp out quackery

MORE than 400 doctors, medical researchers and scientists have formed a powerful lobby group to pressure universities to close down alternative medicine degrees. Almost one in three Australian universities now offer courses in some form of alternative therapy or complementary medicine, including traditional Chinese herbal medicine, chiropractic, homeopathy, naturopathy, reflexology and aromatherapy.

But the new group, Friends of Science in Medicine, wrote to vice-chancellors this week, warning that by giving "undeserved credibility to what in many cases would be better described as quackery" and by "failing to champion evidence-based science and medicine", the universities are trashing their reputation as bastions of scientific rigour. The group, which names world-renowned biologist Sir Gustav Nossal and the creator of the cervical cancer vaccine Professor Ian Frazer among its members, is also campaigning for private health insurance providers to stop providing rebates for alternative medical treatments.

A co-founder of the group, Emeritus Professor John Dwyer, of the University of NSW, who is also a government adviser on consumer health fraud, said it was distressing that 19 universities were now offering "degrees in pseudo science". "It's deplorable, but we didn't realise how much concern there was out there for universities' reputations until we tapped into it," Professor Dwyer said. "We're saying enough is enough. Taxpayers' money should not be wasted on funding [these courses] … nor should government health insurance rebates be wasted on this nonsense."

Professor Dwyer said it was particularly galling that such courses were growing in popularity while, at the same time, the federal government was looking at ways to get the Therapeutic Goods Administration to enforce tougher proof-of-efficacy criteria for complementary medicines, following the release of a highly critical review by the Australian National Audit Office last September.

A Question of Currency: Should Australians Invest in the Fourth Reich?

The Chinese character for 'crisis' is made up of two other characters - 'danger' and 'opportunity'. Europe's sovereign debt kerfuffle is obviously a crisis, and it sure is dangerous. But is it an opportunity for Australian investors?
The short answer is a definitive 'maybe'. This Daily Reckoning will explore why. After explaining why we called Europe the 'Fourth Reich' in our title.
It's a term that Adolf Hitler popularised. 'Reich' more or less means empire in German. The First Reich was the Holy Roman Empire, the second the German Empire of 1871 to 1918, and the third Hitler's own. The Fourth Reich, as you might have guessed, is the European Union.
Who's in charge this time around? The newly undemocratically elected President is none other than a German who goes by the name 'Schulz'. Presumably not the one of Hogan's Heroes fame.
Sergeant Schultz's personal motto at the POW camp where he works is 'I know nothing, I see nothing.' The POWs Schultz guards know how to take advantage of this. In much the same way, the Greeks knew how to take advantage of the same mentality at the EU when it came to deficits.
Anyway, back to investing in the Fourth Reich.
The first thing to think about, if you're a foreign investor in another region, is currencies. Well, it may not quite be the first thing, but here is why it deserves the top spot in this article: movements in currencies can make or break investment returns. Not just in the sense that you will have to convert your hard won gains or blameless losses in foreign markets back to Aussie dollars at a changing exchange rate.
No, currency moves can strongly influence the actual returns before they're even converted to your domestic currency. We've written about this in the past. The Aussie dollar gold price isn't up as much as its American equivalent because of the Aussie dollar strength. That's an example of currency moves determining entirely domestic investment returns for Australians. The ASX200 suffered a similar fate when compared to indices in the UK and US. It underperformed badly, probably because of the Aussie dollar's rise.
So currency moves can influence your foreign investments in their return and in their conversion back to your own currency. But how do things look between the Euro and Aussie dollar? We asked Slipstream Trader Murray Dawes what he thought about this one year chart of the AUD/EUR.
[Chart: AUD/EUR exchange rate over one year. Source: Yahoo Finance]

But Murray turned out to be busy taking advantage of the AUD/USD exchange rate in Hawaii. Not that the picture isn't clear anyway. The Aussie has taken off to the upside since mid December last year. And it's up around 70% from the low it reached during the panic of 2008. So the momentum and trend indicate a strengthening Aussie dollar relative to the Euro and, to a lesser extent recently, the US dollar.
What's odd about this is that, in normal times, this would be interpreted as positive for the Australian stock market. Money flowing into Australia should push up asset prices here. But that's not what we saw. Instead, asset prices fell to offset the rising Aussie dollar. So perhaps the rising Aussie dollar is really telling us about USD and EUR weakness? Both currencies are being printed at breakneck speeds after all.
If that trend continues, investing in Europe would see you lose some of the value of your Euros once you bring them home to Australia.
The flipside to this is supposed to be that the Aussie dollar could crash if another crisis breaks out. That would enhance your returns in Europe. A 50% fall in the AUD/EUR exchange rate would be a rather nice return. Assuming your investment fell less in Euro terms.
There's a slight hiccup with this strategy. If Europe and the Euro are the trigger for this crisis, it could see the common currency plummet instead. Perhaps alongside the Aussie dollar. Investing in Europe could turn out to be a lose/lose scenario.
That would be ironic, as our opening statement is supposedly a misconception. The Chinese character for 'crisis' is not made up of the characters 'danger' and 'opportunity'. Instead, it features 'danger' and a character that 'means something like incipient moment; crucial point (when something begins or changes)' according to this website.
In other words, the opportunity you think you have could just be a dangerous crisis.

Jan 29, 2012

All hail mighty Aussie dollar, as it's here to stay

This year we'll see more painful evidence of Australian businesses accepting the new reality: our dollar is likely to stay uncomfortably high for years, even decades. It has suited a lot of people to believe that just as the resources boom would be a relatively brief affair, so the high dollar it has brought about wouldn't last.

If there were no more to the resources boom than the skyrocketing of world prices for coal and iron ore, that might have been a reasonable expectation. But the extraordinary boom in the construction of new mining facilities makes it a very different story.

The construction boom is likely to run until at least the end of this decade, maybe a lot longer. The pipeline of projects isn't likely to be greatly reduced by any major setback in the world economy. That's particularly because so much of the pipeline is accounted for by the expansion of our capacity to export natural gas. The world's demand for gas is unlikely to diminish.

Last time I looked, the dollar was worth US105¢, compared with its post-float average of about US75¢. But that's not the full extent of its strength. At about 81 euro cents and 67 British pence it's the highest it's been against those currencies for at least the past 20 years.

In the context of the resources boom, the high exchange rate performs three economic functions. First, it helps to make the boom less inflationary, both directly by reducing the prices of imported goods and services and indirectly by lowering the international price competitiveness of our export- and import-competing industries.

Second, by lowering the prices of imports, it spreads some of the benefit from the miners' higher export prices throughout the economy. In effect, it transfers income from the miners to all those consumers and businesses that buy imports, which is all of them. So don't say you haven't had your cut.

Third, by reducing the price competitiveness of our export- and import-competing industries, it creates pressure for resources - capital and labour - to shift from manufacturing and service export industries to the expanding mining sector.

That is, it helps change the industry structure of the economy in response to Australia's changed ''comparative advantage'' - the things we do best among ourselves compared with the things other countries do best.

As businesses recognise the rise in the dollar is more structural than temporary and start adjusting to it, painful changes occur, including laying off workers. Paradoxically, this adjustment is likely to raise flagging productivity performance.

Economists have long understood that the exchange rate tends to move up or down according to movement in the terms of trade (the prices we receive for exports relative to the prices we pay for imports). This explains why the $A has been so strong, for most of the time, since the boom began in 2003.

But here's an interesting thing. In the December quarter of last year, our terms of trade deteriorated by about 5 per cent as the problems in Europe caused iron ore and other commodity prices to fall. They probably fell further this month.

This being so, you might have expected the $A to fall back a bit, but it's stayed strong and even strengthened a little. Why? Because when the terms of trade weakened, other factors strengthened. The main factor that's changed is the rest of the world's desire to acquire Australian dollars and use them to buy Australian government bonds.

Indeed, the desire to hold Australian bonds was so strong it more than fully financed the deficit on the current account of the balance of payments in the September quarter. It may have done the same in the December quarter. Among the foreigners more desirous of holding our bonds are various central banks.

Remember that, at the most basic level, what causes the value of the $A to rise on any day is that people want to buy more of them than other people want to sell. The price rises until supply increases and demand falls sufficiently to make the two forces equal.

So economists' theories about what drives the value of the $A are just after-the-fact attempts to explain why the currency moved the way it did. We know from long observation that there's a close correlation between our terms of trade and the $A.

Jan 26, 2012

Fixed-term technology contracts on the rise | The Australian

A SURGE in the use of fixed-term contracts to engage technology workers is partly fuelled by employers wanting to "have their cake and eat it too", according to recruiters.

The trend has become more prevalent in the IT sector since late last year, when the European debt crisis worsened. Robert Half Technology associate director Jon Chapman said the use of fixed-term contracts was driven by either economic uncertainty or a desire to keep costs down. "Where clients either have an urgent piece of project work that they may have historically used a true contractor, they are now looking to use a fixed-term contractor," he said. "Or when they have got a genuinely permanent position, global uncertainty is leading them to hedge and only offer up as a fixed-term contract.

"It is a slight element of clients looking to have their cake and eat it too, because they are looking for someone to come onboard quickly -- be there for only a short space of time and deliver often something quite well-defined -- but look to pay that person like a permanent employee on a salary and not on a contract rate, which would traditionally be higher." Mr Chapman said using a fixed-term contract to fill a genuinely permanent position could exclude a large and high-quality portion of the candidate pool. "If they are in a permanent job they won't jump for just a six-month or one-year commitment from a client," he said.

"And if they are true contractors they probably wouldn't want to take it on either, because the equivalent rates would be higher if they were doing a genuine contract rather than taking on a fixed-term contract." The IT hiring market currently favoured contracts over permanent roles, recruiters said.

"While the two types of requirement will remain broadly even, this third model shows signs of being in vogue this year," Mr Chapman said. Taylor Coulter director Penny Coulter said the model appealed to businesses. "We have seen the engagement of fixed-term employee contracts in the market -- the most attractive form of engagement for an employer, not permanent and not paying contract rates," Ms Coulter said.

Alcami director Jane Bianchini said the trend for organisations to "have the best of both worlds" was not "going down well" with the candidate market. "They are using these mechanisms so they don't pay high contracting rates, but also not to have the risk of hiring an employee and potentially either having to offshore that job function or re-direct the work of that job function to a different division or team and then even having to make that position redundant," Ms Bianchini said. "So the halfway house is these fixed-term hires."

Peoplebank chief executive officer Peter Acheson said Australian companies that experienced the skills shortage of 2006 to 2008 were wary of being caught out again. "We have seen a tightening of the IT market in the past 12 to 18 months so one of the ways of dealing with that is to bring people on either on a contract basis or on a fixed-term contract," he said. "People use permanent hiring to try and lock staff in so they are less prone to shopping the market and are more loyal when candidate markets are tight."

He said as the market improved firms would hire permanent staff.

Jan 25, 2012

Getting the hang of IOPS | Symantec Connect Community

If you are an Altiris Administrator, take it from me that IOPS are important to you. What I hope to do in today's article is help you understand what IOPS are and why they are important when sizing your disk subsystems. In brief, I cover the following,

  • Harddisk basics -how harddisks work!
  • Drive response times
  • Interpreting drive throughputs -what these figures actually mean
  • What IOPS are and why they are so important
  • IOPS calculations and disk arrays

I should state now that I do not consider myself an expert on this topic. However, every so often I find myself benchmarking disks, and I know the learning curve I had to climb to interpret all the various vendor stats -the information overload can be overwhelming. What I'm going to attempt in this article is to herd together all the salient pieces of information I've gathered over time. With luck, this will help you engage in a meaningful dialogue with your storage people to get the performance you need from your storage.

Introduction
Disk Performance Basics
Hard Disk Speeds - It's more than just RPM...
The Response Time
Disk Transfer Rates aka the 'Sequential Read'
Zone Bit Recording
Understanding Enterprise Disk Performance
Disk Operations per Second - IOPS
IOPS and Data
IOPS and Partial Stroking
How Many IOPS Do We Need?
IOPS, Disk Arrays & Write Penalties
Summary
Further Reading

Introduction

If you are looking at IT Management Suite (ITMS), one of the underpinning technologies which needs to be considered in earnest is Microsoft SQL Server. Specifically, you want to be sure that your SQL Server is up to the job. There are many ways to help SQL Server perform well. Among them are,

  • Move both the server OS and the SQL Server application to 64-bit
  • Ensure you've got enough RAM to load your entire SQL database into memory
  • Ensure you've got enough processing power on-box
  • Ensure the disk subsystem is up to the task
  • Implement database maintenance plans
  • Performance monitoring

One of the most difficult line items to get right in the above list is ensuring the disk subsystem is up to the task. This is important -you want to be sure that the hardware you are considering is suitable from the outset for the loads you anticipate placing on your SQL Server.

Once your hardware is purchased, you can of course tweak how SQL server utilises the disks it's been given. For example, to reduce contention we can employ different spindles for the OS, databases and log files. You might even re-align your disk partitions and tune your volume blocksizes when formatting.

But specifying the disk subsystem initially leads to a lot of tricky questions,

  1. How fast really are these disks?
  2. Okay I now know how fast they are. Err... Is that good?
  3. Is the disk configuration suitable for the SQL requirements of ITMS 7.1?

Before we can begin to answer these questions, we really need to start at the beginning...

Disk Performance Basics

Disk performance is an interesting topic. Most of us tend to think of it in terms of how many megabytes per second (MB/s) we can get out of our storage. Day-to-day tasks like computer imaging and copying files between disks teach us that this MB/s figure is indeed an important benchmark.

It is however vital to understand that these processes belong to a specific class of I/O which we call sequential. For example, when we are reading a file from beginning to end in one continuous stream we are actually executing a sequential read. Likewise, when copying large files the write process to the new drive is called a sequential write.

When we talk about rating a disk subsystem's performance, the sequential read and write operations are only half the story. To see why, let's take a look into the innards of a classic mechanical harddisk.

Hard Disk Speeds - It's more than just RPM...
A harddisk essentially consists of some drive electronics, a spinning platter and a number of read/write heads which can be swung across the disk on an arm. Below I illustrate, in gorgeous PowerPoint art, the essential components of a disk drive. Note I am focusing on the mechanical aspects of the drive as it is these which limit the rate at which we can read data from (and write data to) the drive.

The main items in the above figure are,

  1. The Disk Platter
    The platter is the disk within the drive housing upon which our information is recorded. The platter is a hard material (i.e. not floppy!) which is usually either aluminium, glass or a ceramic. This is coated with a magnetic surface to enable the storage of magnetic bits which represent our data. The platter is spun at incredible speeds by the central spindle (up to 250kmph on the fastest disks) which has the effect of presenting a stream of data under the disk head at terrific speeds.

    In order to provide a means to locate data on the disk, these platters are formatted with thousands of concentric circles called tracks. Each track is subdivided into sectors, each of which stores 512 bytes of data.

    As there is a limit to the density with which vendors can record magnetic information on a platter, manufacturers will often be forced to make disk drives with several platters in order to meet the storage capacities their customers demand.

  2. The Drive Head
    This is the business end of the drive. The heads read and write information bits to and from the magnetic domains that pass beneath them on the platter surface. There are usually two heads per platter, one on each side of the disk.
  3. The Actuator Arm
    This is the assembly which holds the heads and ensures (through the actuator) that the heads are positioned over the correct disk track.

When considering disk performance one of the obvious players is the platter spin speed. The drive head will pick up far more data per second from a platter which spins at 1000 Rotations Per Minute (RPM) when compared with one that spins just once per minute! Simply put, the faster the drive spins the more sectors the head can read in any given time period.

Next, the speed with which the arm can be moved between the disk tracks will also come into play. For example, consider the case where the head is hovering over say track 33 of a platter. An I/O request then comes in for some data on track 500. The arm then has to swing the head across 467 tracks in order to reach the track with the requested data. The time it takes for the arm to move that distance will fundamentally limit the number of random I/O requests which can be serviced in any given time. For the purposes of benchmarking, these two mechanical speeds which limit disk I/O are provided in the manufacturer's specification sheets as times,

  1. Average Latency
    This is the time taken for the platter to undergo half a disk rotation. Why half? Well at any one time the data can be either a full disk rotation away from the head, or by luck it might already be right underneath it. The time taken for a half rotation therefore gives us the average time it takes for the platter to spin round enough for the data to be retrieved.
  2. Average Seek Time
    Generally speaking, when an I/O request comes in for a particular piece of data, the head will not be above the correct track on the disk. The arm will need to move so that the head is directed over the correct track, where it must then wait for the platter spin to present the target data beneath it. As the data could potentially be anywhere on the platter, the average seek time is the time taken for the head to travel half way across the disk.

So, whilst disk RPM is important (as this yields the average latency above) it is only half the story. The seek time also has an important part to play.

The Response Time
Generally speaking, the time taken to service an individual (and random) I/O request will be limited by the combination of the above defined latency and seek times. Let's take for example a fairly mainstream retail laptop harddisk -a Seagate Momentus. From the Seagate website its specifications are,

Spin Speed (RPM) .................. 7200 RPM
Average latency .......................4.17ms
Seek time (Read) .....................11ms
Seek time (Write) .....................13ms
I/O data transfer rate ................300MB/s

Returning to our special case of a sequential read, we can see that the time taken to locate the start of our data will be the sum of the average latency and the average seek times. This is because once the head has moved over the disk to the correct track (the seek time) it will still have to wait (on average) for half a platter rotation to locate the data. The total time taken to locate and read the data is called the drive's response time,

Response Time = (Average Latency) + (Average Seek Time)

I've heard people question this formula on the grounds that these two mechanical motions occur concurrently -the platter is in motion whilst the arm is tracking across the disk. The thinking then is that the response time is whichever is the larger of the seek and the latency. This thought experiment however has a flaw -once the drive head reaches the correct track it has no idea what sector is beneath it. The head only starts reading once it reaches the target track and thereafter must use the sector address marks to orient itself (see figure below). Once it has the address mark, it knows where it is on the platter and therefore how many sector gaps must pass before the target sector arrives.

The result is that when the head arrives at the correct track, we will still have to wait on average for half a disk rotation for the correct sector to be presented. The formula which sums the seek and latency to provide the drive's response time is therefore correct.

Digression aside, the response time for our Seagate Momentus is therefore,

Response Time = 11ms + 4.17ms = 15.17ms

So the drive's response time is a little over 15 thousandths of a second. Well that sounds small, but how does this compare with other drives and in what scenarios will the drive's response time matter to us?
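
To make the arithmetic easy to replay, here is a minimal Python sketch of the response time calculation above. The 7200RPM and 11ms figures are the Momentus numbers from the spec list; the function names are just illustrative.

def average_latency_ms(rpm):
    # Average rotational latency is the time for half a platter rotation.
    return (60_000.0 / rpm) / 2.0

def response_time_ms(rpm, seek_ms):
    # Response time = average latency + average seek time.
    return average_latency_ms(rpm) + seek_ms

print(round(average_latency_ms(7200), 2))        # ~4.17 ms
print(round(response_time_ms(7200, 11.0), 2))    # ~15.17 ms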

To get an idea of how a drive's response time impacts on disk performance, let's first see how this comes into play in a sequential read operation.

Disk Transfer Rates aka the 'Sequential Read'
Most disk drive manufacturers report both the response time, and a peak transfer rate in their drive specification. The peak transfer rate typically refers to the best case sequential read scenario.

Let's assume the OS has directed the disk to perform a large sequential read operation. After the initial average overhead of 15.17ms to locate the start of the data, the actuator arm now needs to move only fractionally with each disk rotation to continue the read (assuming the data is contiguous). The rate at which we can read data off the disk is now limited by the platter RPM and how much data the manufacturer can pack into each track.
Well, we know the RPM of the platter, but what about the data density on the platter? For that we have to dig into the manufacturer's spec sheet,

This tells us that the number of bits per inch of track is 1,490,000. Let's now use this data to work out how much data the drive could potentially deliver on a sequential read.

Noting this is a 2.5 inch drive, the maximum track length is going to be the outer circumference of the drive (pi * d) = 2.5 * 3.14 = 7.87 inches. As we have a data density of 1,490 kbits per inch, this means the maximum amount of data which can be crammed onto a track is about,

Data Per Track = 7.87 * 1,490 kbits = 11,734 kbits = 1.43MB

Now a disk spinning at 7200RPM is actually spinning 120 times per second, which means that the total amount of data which can pass under the head in one second is a massive 172MB or so (120 * 1.43MB).

Taking into account that perhaps about 87% of a track is data, this gives a maximum disk throughput of about 150MB/s, which is surprisingly in agreement with Seagate's own figures.

Note that this calculation is best case -it assumes the data is being sequentially read from the outermost tracks of the disk and that there are no other delays between the head reading the data and the operating system which requested it. As we start populating the drive with data, the tracks get smaller and smaller as we work inwards (don't worry -we'll cover this in Zone Bit Recording below). This means less data per track as you work towards the centre of the platter, and therefore less data passing under the head in any given time frame.

To see how bad the sequential read rate can get, let's perform the same calculation for the smallest track which has a 1 inch diameter. This gives a worst case sequential read rate of 60MB/s! So when your users report that their computers get progressively slower with time, they might not actually be imagining it. As the disk fills up, retrieving the data from the end of a 2.5inch drive will be 2.5 times slower than retrieving it from the start. For a 3.5 inch desktop harddisk the difference is 3.5 times.
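
For anyone who wants to replay these throughput sums, here is a small Python sketch using the same assumptions as above (1,490 kbits per inch, 7200RPM, and roughly 87% of each track holding user data). It is a geometry-based estimate only, not a benchmark.

import math

BITS_PER_INCH = 1_490_000      # linear recording density from the spec sheet
RPM = 7200
DATA_FRACTION = 0.87           # assumed fraction of each track that holds user data

def sequential_mb_per_s(track_diameter_inches):
    # Data on one track (in bytes) multiplied by rotations per second.
    track_bytes = math.pi * track_diameter_inches * BITS_PER_INCH / 8
    return track_bytes * (RPM / 60) * DATA_FRACTION / 1_000_000

print(round(sequential_mb_per_s(2.5)))   # outermost track: roughly 150MB/s
print(round(sequential_mb_per_s(1.0)))   # innermost track: roughly 60MB/s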

The degradation which comes into play as a disk fills up aside, the conclusion to take away from this section is that a drive's response time does not impact on sequential read performance. In this scenario, the drive's data density and RPM are the important figures to consider.

Before we move onto a scenario where the response time is important, let's look at how drives manage to store more data on their outer tracks than they do on their inner ones.

Zone Bit Recording
As I stated in the above section, the longer outer tracks contain more data than the shorter inner tracks. This might seem obvious, but this has not always been the case. When harddisks were first brought to market their disk controllers were rather limited. This resulted in a very simple and geometric logic in the way tracks were divided into sectors as shown below. Specifically, each track was divided into a fixed number of sectors over which the data could be recorded. On these disks the number of sectors-per-track was a constant quantity across the platter.

As controllers became more advanced, manufacturers realised that they were finally able to increase the complexity of the platter surface. In particular, they were able to increase the numbers of sectors per track as the track radius increased.

The optimum situation would have been to record on each track as many sectors as possible into its length, but as disks have thousands of tracks this presented a problem - the controller would have to keep a table of all the tracks with their sector counts so it would know exactly what track to move the head to when reading a particular sector. There is also a law of diminishing returns at play if you continue to attempt to fit the maximum number of sectors into each and every track.

A compromise was found. The platter would be divided into a small number of zones, each zone being a logical grouping of tracks with a specific sectors-per-track count. This had the advantage of increasing disk capacities by using the outer tracks more effectively. Importantly, this was achieved without introducing a complex lookup mechanism on the controller when it had to figure out where a particular sector was located.

The diagram above shows an example where the platter surface is divided into 5 zones. Each of these zones contains a large number of tracks (typically thousands), although this is not illustrated in the above pictures for simplicity. This technique is called Zone Bit Recording, or ZBR for short.

On some harddisks, you can see this zoning manifest very clearly if you use a disk benchmarking tool like HD Tune. This tool tests the disk's sequential read speed working from the outermost track inwards. In the particular case of one of my Maxtor drives, you can see quite clearly that the highest disk transfer rates are obtained on the outer tracks. As the tool moves inwards, we see a sequence of steps as the read head crosses zones possessing a reduced number of sectors per track. In this case we can see that the platter has been divided into 16 zones.

This elegant manifestation of ZBR is sadly hard to find on modern drives -the stairs are generally replaced by a spiky mess. My guess is that other trickery is at play with caches and controller logic which results in so many data bursts as to obscure the ZBR layout.

Understanding Enterprise Disk Performance

Now that we've covered the basics of how harddisks work, we're ready to take a deeper look into disk performance in the enterprise. As we'll see, this means thinking about disk performance in terms of response times instead of the sustained disk throughputs we've considered up to now.

Disk Operations per Second - IOPS
What we have seen in the above sections is that the disk's response time has very little to do with a harddisk's transfer rate. The transfer rate is in fact dominated by the drive's RPM and linear recording density (the maximum number of sectors per track).

This raises the question: exactly when does the response time become important?

To answer this, let's return to where this article started -SQL Servers. The problem with databases is that database I/O is unlikely to be sequential in nature. One query could ask for some data at the top of a table, and the next query could request data from 100,000 rows down. In fact, consecutive queries might even be for different databases.
If we were to look at the disk level whilst such queries are in action, what we'd see is the head zipping back and forth like mad -apparently moving at random as it tries to read and write data in response to the incoming I/O requests.

In the database scenario, the time it takes for each small I/O request to be serviced is dominated by the time it takes the disk heads to travel to the target location and pick up the data. That is to say, the disk's response time will now dominate our performance. The response time now reflects the time our storage takes to service an I/O request when the request is random and small. If we turn this benchmark on its head, we can invert it to give the number of Input/Output Operations Per Second (IOPS) our storage provides.

So, for the specific case of our Seagate drive with a 15.17ms response time, it will take on average at least 15.17ms to service each I/O. Turning this on its head to give us our IOPS yields (1 / 0.01517), which is 66 IOPS.

Before we take a look and see whether this value is good or bad, I must emphasise that this calculation has not taken into account the process of reading or writing data. An IOPS value calculated in these terms is actually referring to zero-byte file transfers. As ludicrous as this might seem, it does give a good starting point for estimating how many read and write IOPS your storage will deliver as the response time will dominate for small I/O requests.

In order to gauge whether my Seagate Momentus IOPS figure of 66 is any good or not, it would be useful to have a feeling for the IOPS values that different classes of storage provide. Below is an enhancement to a table inspired by Nick Anderson's efforts where he grouped various drive types by their RPM and then inverted their response times to give their zero-byte read IOPS,

As you can see, my Seagate Momentus actually sits in the 5400RPM bracket even though it's a 7200RPM drive. Not so surprising as this is actually a laptop drive, and compromises are often made in order to make such mobile devices quieter. In short -your mileage will vary.

IOPS and Data
Our current definition of a drive's IOPS is based on the time it takes a drive to retrieve a zero-sized file. Of immediate concern is what happens to our IOPS values as soon as we want to start retrieving or writing data. In this case, we'll see that both the response time and the sequential transfer rate come into play.

To estimate the I/O request time, we need to sum the response time with the time required to read/write our data (noting that a write seek is normally a couple of ms longer than a read seek to give the head more time to settle). The chart below therefore shows how I'd expect the IOPS to vary as we increase the size of the data block we're requesting from our Seagate Momentus drive.

So our 66 IOPS Seagate drive will in a SQL Server scenario (with 64KB block sizes) actually give us 64 IOPS when reading and 56 IOPS when writing.
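
As a rough model of the chart described above, the sketch below adds the block transfer time to the response time and inverts the total. The 150MB/s internal transfer rate and the read/write seek figures are the Momentus numbers used earlier; the block sizes are illustrative assumptions.

LATENCY_MS = 4.17
READ_SEEK_MS = 11.0
WRITE_SEEK_MS = 13.0
TRANSFER_MB_PER_S = 150.0

def estimated_iops(block_kb, seek_ms):
    # Each random I/O costs seek + rotational latency + block transfer time.
    transfer_ms = (block_kb / 1000.0) / TRANSFER_MB_PER_S * 1000.0
    return 1000.0 / (LATENCY_MS + seek_ms + transfer_ms)

print(round(estimated_iops(0, READ_SEEK_MS)))     # zero-byte read: ~66 IOPS
print(round(estimated_iops(64, READ_SEEK_MS)))    # 64KB read: ~64 IOPS
print(round(estimated_iops(64, WRITE_SEEK_MS)))   # 64KB write: ~56-57 IOPS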

The emphasis here is that when talking about IOPS (and of course comparing them), it is important to confirm the block sizes being tested and whether we are talking about reading or writing data. This is especially important for drives where the transfer times start playing a more significant role in the total time taken for the IO operation to be serviced.

As real-world IOPS values are detrimentally affected when I/O block sizes are considered (and also of course if we are writing instead of reading), manufacturers will generally quote a best-case IOPS. This is taken from the time taken to read the minimum amount from a drive (512 bytes). This essentially yields an IOPS value derived from the drive's response time.

Cynicism aside, this simplified way of looking at IOPS is actually fine for ball-park values. It's worth bearing in mind, though, that these quoted values are always going to be rather optimistic.

IOPS and Partial Stroking
If you recall, our 500GB Seagate Momentus has the following specs,

Spin Speed (RPM) .................. 7200 RPM
Average latency .......................4.17ms
Seek time (Read) .....................11ms
Internal I/O data transfer rate .....150MB/s
IOPS........................................66

On the IOPS scale, we've already determined that this isn't exactly a performer. If we wanted to use this drive for a SQL database we'd likely be pretty disappointed. Is there anything we can do once we've bought the drive to increase its performance? Technically of course the answer is no, but strangely enough we can cheat the stats by being a little clever in our partitioning.

To see how this works, let's partition the Momentus drive so that only the first 100GB is formatted. The rest of the drive, 400GB worth, is now a dead-zone to the heads -they will never go there. This has a very interesting consequence for the drive's seek time. The heads are now limited to a small portion of the drive's surface, which means the time to traverse from one end of the formatted region to the other is much smaller than the time it would have taken for the head to cross the entire disk. This reflects rather nicely on the drive's seek time over that 100GB surface, which has an interesting effect on the drive's IOPS.

To get some figures, let's assume that about 4ms of a drive's seek time is taken up with accelerating and decelerating the heads (2ms to accelerate, and 2ms to decelerate). The rest of the drive's seek time can then be attributed to its transit across the platter surface.

So, by reducing the physical distance the head has to travel to a fifth of the drive's surface, we can estimate that the transit time is going to be reduced likewise. This gives a new seek time of around (11 - 4)/5 + 4 = 5.4ms; to keep the estimate on the conservative side we'll work with a figure of 6.4ms below.

In fact, as more data is packed into the outside tracks due to ZBR, this would be a conservative estimate anyway. If the latter four fifths of the drive were never going to be used, the drive stats would now look as follows,

Spin Speed (RPM) .................. 7200 RPM
Average latency .......................4.17ms
Seek time (Read) .....................6.4ms (for 0-100GB head movement restriction)
Internal I/O data transfer rate .....150MB/s
IOPS........................................94

The potential IOPS for this drive has increased by over 40%. In fact, it's pretty much comparable now to a high-end 7200RPM drive! This trick is called partial stroking, and can be quite an effective way to make slower RPM drives perform like their higher RPM brothers. Yes, you do lose capacity, but in terms of cost you can save overall.
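
Here is a back-of-envelope Python sketch of the partial stroking estimate. The 4ms acceleration/deceleration allowance and the assumption that transit time scales with the fraction of the platter in use come from the text above; the 6.4ms value is the deliberately conservative figure used in the spec list.

LATENCY_MS = 4.17
FULL_SEEK_MS = 11.0
ACCEL_DECEL_MS = 4.0

def stroked_seek_ms(used_fraction):
    # The transit portion of the seek shrinks with the fraction of the platter in use.
    return ACCEL_DECEL_MS + (FULL_SEEK_MS - ACCEL_DECEL_MS) * used_fraction

def zero_byte_iops(seek_ms):
    return 1000.0 / (LATENCY_MS + seek_ms)

print(round(zero_byte_iops(FULL_SEEK_MS)))            # whole drive: ~66 IOPS
print(round(stroked_seek_ms(0.2), 1))                 # first 100GB of 500GB: ~5.4ms seek
print(round(zero_byte_iops(stroked_seek_ms(0.2))))    # ~104 IOPS with the raw estimate
print(round(zero_byte_iops(6.4)))                     # roughly 94-95 IOPS with the conservative 6.4ms figure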

To see if this really works, I've used IOMETER to gather a number of response times for my Seagate Momentus using various partition sizes and a 512 byte data transfer.

Here we can see that the back-of-envelope calculation wasn't so bad -the average I/O response time for the 100GB partition worked out to be 11ms and the quick calculation gave about 10.5ms. Not bad considering a lot of guesswork was involved -my figures for head acceleration and deceleration were plucked out of the air. Further, I didn't add a settling time for the head before it started reading the data, to allow the vibrations in the actuator arm to settle down. In truth, I likely over-estimated the arm acceleration and deceleration times, which had the effect of absorbing the head settle time.

But, as a rough calculation I imagine this wouldn't be too far off for most drives.

Your mileage will of course vary across drive models, but if for example you are looking at getting a lot of IOPS for a 100GB database, I'd expect that a 1TB 7200RPM Seagate Barracuda with 80 IOPS could be turned into a 120 IOPS drive by partitioning it for such a purpose. This would take the drive into the 10K RPM ballpark on the IOPS scale for less than half the price of a 100GB 10K RPM disk.

As you can see, this technique of ensuring most of the drive's surface is a 'dead-zone' for the heads can turn a modest desktop harddisk into an IOPS king for its class. And the reason for doing this is not to be petty, or to prove a point -it's cost. Drives with high RPM and high quoted IOPS tend to be rather expensive.

Having said that, I don't imagine that many vendors would understand you wanting to effectively throw the bulk of your drive's capacity out of the window. Nor, probably, would your boss...

How Many IOPS Do We Need?

Whilst enhancing our IOPS with partial stroking is interesting, what we're missing at the moment is where in the IOPS spectrum we should be targeting our disk subsystem infrastructure.
The ITMS 7.1 Planning and Implementation Guide has some interesting figures for a 20,000 node setup where SQL I/O was profiled for an hour at peak time,

The conclusion was that the main SQL Server CMDB database required on average 240 write IOPS over this hour window. As we don't want to target our disk subsystem to be working at peak, we'd probably want to aim for a storage system capable of 500 write IOPS.

This IOPS target is simply not achievable with a single mechanical drive, so we must move our thinking to drive arrays in the hope that by aggregating disks we can start multiplying up our IOPS. As we'll see, it is at this point that things get murky...

IOPS, Disk Arrays & Write Penalties
A quick peek under the bonnet of most enterprise servers will reveal a multitude of disks connected to a special disk controller called a RAID controller. If you are not familiar with RAID, there is plenty of good online info available on this topic, and RAID's Wikipedia entry isn't such a bad place to start.

To summarise, RAID stands for Redundant Array of Independent Disks. This technology answers the need to maintain enterprise data integrity in a world where harddisks have a life expectancy and will someday die. The RAID controller abstracts the underlying physical drives into a number of logical drives. By building fault-tolerance into the way data is physically distributed, RAID arrays can be built to withstand a number of drive failures before data integrity is compromised.

Over the years many different RAID schemes have been developed to allow data to be written to a disk array in a fault tolerant fashion. Each scheme is classified and allocated a RAID level. To help in the arguments that follow concerning RAID performance, let's review now some of the more commonly used RAID levels,

  • RAID 0
    This level carves up the data to be written into blocks (typically 64K) which are then distributed across all the drives in the array. So when writing a 640KB file through a RAID 0 controller with 5 disks, it would first divide the file into 10 x 64KB blocks. It would then write the first 5 blocks, one to each of the 5 disks simultaneously, and once that was successful proceed to write the remaining five blocks in the same way. As data is written in layers across the disk array this technique is called striping, and the block size above is referred to as the array's stripe size. Should a drive fail in RAID 0, the data is lost -there is no redundancy. But as the striping concept used here is the basis of the other RAID levels which do offer redundancy, it would be hard to omit RAID 0 from the official RAID classification.

    RAID 0's great benefit is that it offers much improved I/O performance, as all the disks are potentially utilised when reading and writing data.

  • RAID 1
    This is the simplest RAID configuration to understand. When a block of data is written to a physical disk in this configuration, that write is exactly duplicated on another disk. For that reason, these drives are often referred to as mirrored pairs. In the event of a drive failure, the array can continue to operate with no data loss or performance degradation.
  • RAID 5
    This is a fault tolerant version of RAID 0. In this configuration each stripe layer contains a parity block. The storing of a parity block provides the RAID redundancy as should a drive fail, the information the now defunct drive contained can be rebuilt on-the-fly using the rest of the blocks in the stripe layer. Once a drive fails, the array is said to operate in a degraded state. A single read can potentially require the whole stripe to be read so that the missing drive's information can be rebuilt. Should a further drive fail before the defunct drive is replaced (and rebuilt) the integrity of the array will be lost.
  • RAID 6
    As RAID 5 above, but now two drives store parity information, which means that two drives can be lost before array integrity is compromised. This extra redundancy comes at the cost of losing the equivalent of two drives' worth of capacity in the RAID 6 array (whereas in RAID 5 you lose the equivalent of one drive in capacity).
  • RAID 10
    This is what we refer to as a nested RAID configuration -it is a stripe of mirrors and is as such called RAID 1 + 0 (or RAID 10 for short). In this configuration you have a stripe setup as in RAID 0 above, but now each disk has a mirrored partner to provide redundancy. Protection against drive failure is very good as the likelihood of both drives in any one mirror failing simultaneously is low. You can potentially lose up to half of the total drives in the array with this setup (assuming no mirror loses both of its disks).

    With RAID 10 your array capacity is half the total capacity of your storage.

Below I show graphically examples of RAID 5 and RAID 10 disk configurations. Here each block is designated by a letter and a number. The letter designates the stripe layer, and the number designates the block index within that stripe layer. Blocks with the letter p index are parity blocks.

As stated above, one of the great benefits that striping gives is performance.

Let's take again the example of a RAID 0 array consisting of 5 disks. When writing a file, all the data isn't simply written to the first disk. Instead, only the first block will be written to the first disk. The controller directs the second block to the second disk, and so on until all the disks have been written to. If there is still more of the file to write, the controller begins again from disk 1 on a new stripe layer. Using this strategy, you can simultaneously read and write data to a lot of disks, aggregating your read and write performance.
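
As a toy illustration of the striping just described, the snippet below maps each 64KB block of a 640KB file onto a 5-disk RAID 0 array. It models only the block placement, not a real controller; the names and stripe size are simply the figures from the example above.

STRIPE_KB = 64
DISKS = 5

def placement(file_kb):
    # Returns (disk index, stripe layer) for each stripe-sized block of the file.
    blocks = -(-file_kb // STRIPE_KB)          # ceiling division
    return [(b % DISKS, b // DISKS) for b in range(blocks)]

for block, (disk, layer) in enumerate(placement(640)):
    print("block", block, "-> disk", disk, ", stripe layer", layer)
# Blocks 0-4 fill stripe layer 0 across disks 0-4; blocks 5-9 fill layer 1.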

This can powerfully enhance our IOPS. In order to see how IOPS are affected by each RAID configuration, let's now discuss each of the RAID levels in turn and think through what happens for both incoming read and write requests.

  • RAID 0
    For the cases of both read and write IOPS to the RAID controller, one IOPS will result on the physical disk where the data is located.
  • RAID 1
    For the case of a read IOPS, the controller will execute one read IOPS on one of the disks in the mirror. For the case of a write IOPS to the controller, there will be two write IOPS executed -one to each disk in the mirror.
  • RAID 5
    For the case of a read IOPS, the controller does not need to read the parity data -it just directs the read to the disk which holds the data in question, resulting again in one IOPS at the backend. For the case of a disk write we have a problem - we also have to update the parity information in the target stripe layer. The RAID controller must therefore execute two read IOPS (one to read the block we are about to write to, and the other to obtain the parity information for the stripe). We must then calculate the new parity information, and then execute two write IOPS (one to update the parity block and the other to update the data block). One write IOPS therefore results in 4 IOPS at the backend!
  • RAID 6
    As above, one read IOPS to the controller will result in one read IOPS at the backend. One write IOPS will now however result in 6 IOPS at the backend to maintain the two parity blocks in each stripe (3 read and 3 write).
  • RAID 10
    One read IOPS sent to the controller will be directed to the correct stripe and to one of the mirrored pair -so again only one read IOPS at the backend. One write IOPS to the controller however will result in two IOPS being executed at the backend, reflecting that both drives in the mirrored pair require updating.

What we therefore see when utilising disk arrays is the following,

  1. For disk reads, the IOPS capacity of the array is the number of disks in the array multiplied by a single drive IOPS. This is because one incoming read I/O results in a single I/O at the backend.
  2. For disk writes with RAID, the number of IOPS executed at the backend is generally not the same as the number of write IOPS coming into the controller. The result is that the total number of effective write IOPS an array is capable of is generally much less than what you might assume by naively aggregating disk performance.

The number of writes imposed on the backend by one incoming write request is often referred to as the RAID write penalty. Each RAID level suffers from a different write penalty as described above, though for easier reference the table below is useful,

RAID 0 .................. write penalty 1
RAID 1 .................. write penalty 2
RAID 5 .................. write penalty 4
RAID 6 .................. write penalty 6
RAID 10 ................. write penalty 2

Knowing the write penalty each RAID level suffers from, we can calculate the effective IOPS of an array using the following equation,

Effective IOPS = (n * IOPS) / (R + (W * F))

where n is the number of disks in the array, IOPS is the single drive IOPS, R is the fraction of reads taken from disk profiling, W is the fraction of writes taken from disk profiling, and F is the write penalty (or RAID factor).

If we know the number of IOPS we need from our storage array, but don't know the number of drives required to supply that figure, then we can rearrange the above equation as follows,

n = (Required IOPS * (R + (W * F))) / IOPS

So in our case of a SQL Server requiring 500 write IOPS (i.e. pretty much 0% reads), let's assume we are offered a storage solution of 10K SAS drives capable of 120 IOPS apiece. How many disks would we need to meet this write IOPS requirement? The table below summarises the results,

RAID 0 .................. 5 disks
RAID 1 / RAID 10 ........ 9-10 disks
RAID 5 .................. 17 disks
RAID 6 .................. 25 disks

What we see here is a HUGE variation in the number of drives required depending on the RAID level. So, your choice of RAID configuration is very, very important if storage IOPS is important to you.
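
To make the sizing sums concrete, here is a small Python sketch that applies the rearranged equation above to the 500 write IOPS target, assuming 120 IOPS per drive and a 100% write workload (the CMDB profile discussed earlier). The write penalties are the ones listed above.

import math

WRITE_PENALTY = {"RAID 0": 1, "RAID 1": 2, "RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def disks_needed(target_iops, drive_iops, read_frac, write_frac, penalty):
    # n = required IOPS * (R + W * F) / single-drive IOPS, rounded up to whole disks.
    return math.ceil(target_iops * (read_frac + write_frac * penalty) / drive_iops)

for level, penalty in WRITE_PENALTY.items():
    print(level, disks_needed(500, 120, 0.0, 1.0, penalty))
# Gives roughly 5 disks for RAID 0, 9 for RAID 1/10 (10 once mirrors are paired),
# 17 for RAID 5 and 25 for RAID 6.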

I should say that most RAID 5 and RAID 6 controllers do understand this penalty, and will consequently cache as many write IOPS as possible, committing them during idle windows where possible. As a result, in real-world scenarios these controllers can perform slightly better than you'd anticipate from the table above. However, once these arrays become highly utilised the idle moments become fewer, which edges the performance back toward the limits described above.

Summary

This then concludes today's article. I hope it's been useful and that you now have a better understanding of IOPS. The main points to take away from this article are,

  1. Get involved with your server/storage guys when it comes to spec'ing your storage
  2. The important measure for sequential I/O is disk throughput
  3. The important measure for random I/O is IOPS
  4. Database I/O is generally random in nature and, in the case of the Altiris CMDB, the SQL profile is also predominantly write biased.
  5. Choosing your storage RAID level is critical when considering your IOPS performance. By selecting RAID 6 over RAID 1 or RAID 10 you can potentially drop your total write IOPS by a factor of 3.

I should finish by emphasising that this article is a starter on the disk performance journey. As such, this document should not be considered in isolation when benchmarking and spec'ing your systems. Note also that at the top of the reading list below is a *great* Altiris KB article for SQL Server which will help you configure your SQL Server appropriately.

Next in the article pipeline (with luck) will be "Getting the Hang of Benchmarking" which will aim to cover more thoroughly what you can do to benchmark your systems once they are in place.

Referendum To Remove Racial Discrimination From Constitution

The potential problems turn not on what is proposed to be deleted from the constitution but what might be added. The panel proposes that the constitution should contain provisions aimed at securing the advancement of Aboriginal and Torres Strait Islander peoples. At any referendum, this could raise the complex question of who is an indigenous person entitled to such advancement.

In his decision in Eatock v Bolt last year, Federal Court Justice Mordy Bromberg felt the need to address Aboriginal identity when discussing a group he referred to as ''fair-skinned Aboriginal people''. Justice Bromberg accepted that the term Aboriginal Australian applied to ''a person of mixed heritage but with some Aboriginal descent, who identifies as an Aboriginal person and has communal recognition as such''. However, he did not rule out the possibility ''that a person with less than the three attributes of the three-part test should not be recognised as an Aboriginal person''. This is the kind of debate that Australia does not need right now.

Already some Aborigines, whose priorities do not focus on constitutional change, are being criticised for not going along with the panel's proposals. For example, on the ABC TV program The Drum last Thursday, leftist activist Antony Loewenstein attacked Warren Mundine as a ''Murdoch pet who hates everything about mainstream society''. This is mere abuse posing as analysis.

This sort of line of attack against critics, or any allegations labelling Australians as racist if the proposal is rejected for being too complex, would be counter-productive. The 1967 referendum on Aborigines worked because the political timing was correct, the proposal was straightforward and the extremes of left and right were relatively silent.

Economic fixes must offer a fair go for all

When you listen to street interviews with people in the troubled countries of the euro zone, a common complaint emerges: whereas some people waxed fat in the boom that preceded the crisis, it's ordinary workers who suffer most in the bust, and they and even poorer people who bear the brunt of government austerity campaigns intended to fix the problem.

In other words, achieving a well-functioning economy is one thing; achieving an economy that also treats people fairly is another. Economists and business people tend to focus mainly on economic efficiency; the public tends to focus on the fairness of it all.

Fail to fix the economy and almost everyone suffers. But offend people's perceptions of fairness and you're left with a dissatisfied, confused electorate that could react unpredictably.

The trick for governments is to try to achieve a reasonable combination of both economic efficiency and fairness. Fortunately, but a bit surprisingly, the need for this dual approach has penetrated the consciousness of the Organisation for Economic Co-operation and Development - the rich nations' club which is expanding its membership to include the soon-to-be-rich countries.

New research from the organisation deals with ways governments can get their budgets back under control without simply penalising the vulnerable and ways they can improve the economy's functioning and increase fairness at the same time.

Much of the concern about fairness in the hard-hit countries of the North Atlantic has focused on bankers. In the boom these people made themselves obscenely rich by their reckless, greedy behaviour, eventually bringing the economy down and causing many people to lose their businesses and millions to lose their jobs.

But their banks were bailed out at taxpayers' expense - adding to the huge levels of government debt the financial markets now find so unacceptable - and few bankers seem to have been punished. Some have even gone back to paying themselves huge bonuses.

It's a mistake, however, to focus discontent on the treatment of a relative handful of bankers. The fairness problem goes much wider. In most developed countries, the long boom of the preceding two decades saw an ever-widening gap between rich and poor.

In the United States, almost all the growth in real income over the period has been captured by the richest 10 per cent of households (much of it going to the top 1 per cent), so that most Americans' real income hasn't increased in decades.

It hasn't been nearly as bad in Australia. Low and middle household incomes have almost always risen in real terms, even though high incomes have grown a lot faster.

Looking globally, a lot of the widening in incomes has come from the effects of globalisation and, more particularly, technological change, which has increased the wages of the highly skilled relative to the less skilled. But a lot of the widening is explained by government policy changes, such as more generous tax cuts for the well-off.

The euro zone countries need not only to get on top of their budgets and government debt, but also to get their economies growing more vigorously. So the organisation has proposed structural reforms - we'd say microeconomic reforms - which can foster economic growth and fairness at the same time.

One area offering a ''double dividend'' is education. Policies that increase graduation rates from secondary and tertiary education hasten economic growth by adding to the workforce's accumulation of human capital while also increasing the lifetime income of young people who would otherwise do much less well.

Promoting equal access to education helps reduce inequality, as do policies that foster the integration of immigrants and fight all forms of discrimination. Making female participation in the workforce easier should also bring a double dividend.

Surprisingly - and of relevance to our debate about Julia Gillard's Fair Work Act - the organisation acknowledges the role of minimum wage rates, laws that strengthen trade unions, and unfair dismissal provisions in ensuring a more equal distribution of wage income.

It warns, however, that if minimum wages are set too high they may reduce employment, which counters their effect in reducing inequality. And reforms to job protection that reduce the gap between permanent and temporary workers can reduce wage dispersion and possibly also lead to higher employment.

Systems of taxation and payments of government benefits play a key role in lowering the inequality of household incomes. Across the membership of the organisation, three-quarters of the average reduction in inequality achieved by the tax and payments system comes from payments. Means-tested benefits are more redistributive than universal benefits.

Reductions in the rates of income tax to encourage work, saving and investment need not diminish the inequality-reducing effect of income tax, provided their cost is covered by the elimination of tax concessions that benefit mainly high income earners - such as those for investment in housing or the reduction in the tax on capital gains. Getting rid of these would also reduce tax avoidance opportunities for top income earners.

So it's not inevitable that the best-off benefit most during booms and the worst-off suffer most in the clean-up operations after the boom busts. It's a matter of the policies governments choose to implement in either phase of the cycle.

You, however, may think it's inevitable that governments choose policies that benefit the rich and powerful in both phases.

But we're talking about the governments of democracies, where the votes of the rich are vastly outnumbered by the votes of the non-rich. So if governments pursue policies that persistently disadvantage the rest of us, it must be because we aren't paying enough attention - aren't doing enough homework - and are too easily gulled by the vested interests' slick TV advertising campaigns.

IMF warns of dual shock

ECONOMISTS at the International Monetary Fund have called on Australia's biggest banks to bolster their levels of capital even further, warning that the sector may not be able to withstand the dual shock of a residential property downturn and losses on corporate lending.
The finding follows a stress test of Australia's banking system run late last year by the IMF economists, who modelled the impact of an Irish-style economic crunch.
The paper was released as Westpac yesterday became the latest bank to tap local credit markets, raising $3.1 billion, marking the second big domestic issue of covered bonds in as many weeks. However, with Europe's debt crisis causing stresses in global credit markets, Westpac had to pay a premium to lock in the long-term funding.
IMF managing director Christine Lagarde. Photo: Reuters
Credit markets in Europe have stalled in recent months as the region seeks to tackle its sovereign debt crisis. This has pushed up the cost of wholesale funding for banks around the globe.
Westpac had to pay 165 basis points over the benchmark swap rate for the five-year term funding. While analysts called this expensive and likely to further pressure bank profit margins, it was a slightly lower rate than for a similar monster issue by the Commonwealth Bank last week.
The notoriously conservative bank regulator, the Australian Prudential Regulation Authority, has already taken a tough view on bank capital - that is, funds to protect the balance sheet.
It has forced the sector to move to a new set of tougher banking rules, known as Basel III, faster than the agreed global timetable for introduction.
The rules, due to be phased in from next year, are intended to better equip banks to absorb economic and financial shocks by holding more liquid assets, as well as generating more lending from their own deposits.
While the IMF paper's conclusions will have no direct effect on the oversight of Australia's banking sector, the findings will be taken seriously by regulators and politicians.
However, they are expected to be strongly resisted by bank executives, who have been critical of Basel III.
ANZ chief executive Mike Smith and former CBA boss Ralph Norris have argued that putting aside more funds to protect the balance sheet of banks would increase the cost and reduce the funds available for lending.
An IMF research paper by economists Byung Kyoon Jang and Niamh Sheridan said Australian banks were well positioned to meet the minimum capital standards under Basel III. It also concluded that the four big Australian banks had capital well in excess of the regulatory requirements, with high-quality holdings.
However, exposure to ''highly indebted households'' through mortgage lending, together with large levels of short-term offshore borrowing, ranked as key vulnerabilities. ''Combining residential mortgage shocks with corporate losses expected at the peak of the global financial crisis would put more pressure on Australian banks' capital,'' the IMF research paper said. ''Therefore, it would be useful to consider the merits of higher capital requirements for systemically important domestic banks,'' it said.

A spokesman for Treasurer Wayne Swan said the report confirmed that Australia's banks were strong and stable. ''Our banks came through the biggest stress test in 75 years during the global financial crisis, having benefited from years of tough supervision by our world-class regulators,'' he said. ''This is evidenced in Australia's major banks being among only a handful in the world still wearing the AA-rating badge.''

Jan 24, 2012

E-health key trial halted by specifications glitch | The Australian

The National E-Health Transition Authority (NEHTA) halted the rollout of primary care desktop software at 10 trial sites on Friday blaming incompatibility with the national specifications.

It is the latest blow for the Personally Controlled Electronic Health Record (PCEHR) project, which has attracted $466 million in federal funding over two years and is considered vital to efforts to combat preventable and chronic disease.

The national specifications were updated in November and the problems, which have not been detailed, affect most of the Wave 1 and Wave 2 sites: Metro North Brisbane Medicare Local, Inner East Melbourne Medicare Local, Hunter Urban Medicare Local, Accoras in Brisbane South, Greater Western Sydney, St Vincent and Mater Health Sydney, Calvary Health Care ACT, Cradle Coast Electronic Health Information Exchange in Tasmania, the Northern Territory Department of Health and Families, and Brisbane's Mater Misericordiae Health Services.

Only the Medibank Private and Fred IT group sites are unaffected. The Defence Department's Joint e-Health Data and Information also appears to be safe.

NEHTA is expected to renegotiate contracts, keen to salvage what it can from the trial, and determine how to migrate data across to the national system which is due to go live on July 1.

A NEHTA spokesman would not answer specific questions about the issue, but confirmed it was "pausing implementation of the primary care desktop software development".

"NEHTA is acting after internal checks detected issues in the latest release of its specifications in November 2011," he said.

"This is about quality control to ensure absolute confidence in the software being used in the e-Health pilot sites. One of the reasons for having these sites was to test software and "iron out the bugs' prior to the national infrastructure going live."

The spokesman said NEHTA, which was jointly funded by the commonwealth, state and territory governments, was working with the pilot sites and the primary care software vendors to "recalibrate their activity within the e-Health program".

"The pilot site and national infrastructure projects have operated in parallel, but neither is a critical dependency for the other project," he said.

"In large projects of this scale, it is not unusual for problems of this type to arise. We are working to manage this situation to ensure the program is delivered."

Australian Medical Association president Steve Hambleton -- whose Brisbane clinic is in one of the affected sites -- said the issue would cause further disruption for practices.

"It's inevitable when you have a national framework for data you will have some sites that are incompatible. The challenge now is how to migrate that data across," Dr Hambleton said.

He said properly introduced PCEHRs would have significant benefits, although the government had yet to address the issue of funding for doctors, who would be responsible for updating and maintaining the records.

A diet for happiness

"You are what you eat" goes the saying – and what you eat affects more than just your physical well-being.
I'm sure this will come as no great shock, but I don't think I've ever finished an exercise session feeling anything but pumped and exhilarated. Buggered, yes, and sore, often, but always all fired up and ready to take on the world.
I've long been an advocate of a good workout to get your head in the right place. Feel-good endorphins and hormones are released by the pituitary gland and hypothalamus at the base of the brain when we exercise, launching us into a "bring it on" state of mind. However, research by an Aussie (of course) academic has concluded that exercise isn't the only choice on offer for good brain function. A 2010 study by Dr Felice Jacka from Victoria's Deakin University found that what we eat can have a profound effect on our mental health in the long term, reducing the risk of depression and anxiety.
Jacka interviewed more than 1000 women regarding their diet and mental-health symptoms. What made this study different was that for the first time the whole diet of the subjects was looked at, rather than just the role of specific nutrients, such as omega-3, magnesium and folate, in relation to depression and anxiety disorders. Interestingly, the results were the same, irrespective of age or socio-economic status – or even exercise.
Michelle Bridges, personal trainer on the television show The Biggest Loser. Photo: Supplied
The study found that those subjects who had diets high in processed foods and junk food were more likely to suffer anxiety and depression disorders than those who – you guessed it – had wholefood diets high in vegetables, fruit, fish and other lean protein.
Jacka also conducted a study, published in September last year, on adolescents in relation to diet and mental health. With a quarter of young Australians already experiencing mental-health issues, she found that there was a strong suggestion that it may be possible to help prevent teenage depression by getting youngsters to adopt a nutritious, high-quality diet.
What's more, changes in the quality of adolescent diets over two years were reflected in the mental health of subjects. So the kids whose diets got worse over the two years had a commensurate deterioration in their mental health, as opposed to an improvement for those kids who adopted a healthier diet. Wow. And people ask me why I keep banging on about diet and exercise!
If we could rein in the junk-food peddlers, make wholefoods a much cheaper alternative, and each increase our exercise to at least 30 minutes a day, our society would benefit at every level.
Michelle's tip
Start with an experiment – if your diet includes a lot of junk and processed food, go cold turkey for just three days. You will be amazed at how much better you sleep, concentrate, relax and enjoy life. This may just motivate you to change your eating habits for good!

Jan 23, 2012

Seriously pessimistic Preppers


When Patty Tegeler looks out the window of her home overlooking the Appalachian Mountains in southwestern Virginia, she sees trouble on the horizon.
"In an instant, anything can happen," she told Reuters. "And I firmly believe that you have to be prepared."
Tegeler is among a growing subculture of Americans who refer to themselves informally as "preppers." Some are driven by a fear of imminent societal collapse, others are worried about terrorism, and many have a vague concern that an escalating series of natural disasters is leading to some type of environmental cataclysm.
They are following in the footsteps of hippies in the 1960s who set up communes to separate themselves from what they saw as a materialistic society, and the survivalists in the 1990s who were hoping to escape the dictates of what they perceived as an increasingly secular and oppressive government.
Preppers, though, are worried about having no government at all.
Tegeler, 57, has turned her home in rural Virginia into a "survival center," complete with a large generator, portable heaters, water tanks, and a two-year supply of freeze-dried food that her sister recently gave her as a birthday present. She says that in case of emergency, she could survive indefinitely in her home. And she thinks that emergency could come soon.
"I think this economy is about to fall apart," she said.
A wide range of vendors market products to preppers, mainly online. They sell everything from water tanks to guns to survival skills.
Conservative talk radio host Glenn Beck seems to preach preppers' message when he tells listeners: "It's never too late to prepare for the end of the world as we know it."
"Unfortunately, given the increasing complexity and fragility of our modern technological society, the chances of a societal collapse are increasing year after year," said author James Wesley Rawles, whose Survival Blog is considered the guiding light of the prepper movement.
A former Army intelligence officer, Rawles has written fiction and non-fiction books on end-of-civilization topics, including "How to Survive the End of the World as We Know It," which is also known as the preppers' Bible.
"We could see a cascade of higher interest rates, margin calls, stock market collapses, bank runs, currency revaluations, mass street protests, and riots," he told Reuters. "The worst-case end result would be a Third World War, mass inflation, currency collapses, and long term power grid failures."
A sense of "suffering and being afraid" is usually at the root of this kind of thinking, according to Cathy Gutierrez, an expert on end-times beliefs at Sweet Briar College in Virginia. Such feelings are not unnatural in a time of economic recession and concerns about a growing national debt, she said.
"With our current dependence on things from the electric grid to the Internet, things that people have absolutely no control over, there is a feeling that a collapse scenario can easily emerge, with a belief that the end is coming, and it is all out of the individual's control," she told Reuters.
She compared the major technological developments of the past decade to the Industrial Revolution of the 1830s and 1840s, which led to the growth of the Millerites, the 19th-century equivalent of the preppers. Followers of charismatic preacher William Miller, many sold everything and gathered in 1844 for what they believed would be the second coming of Jesus Christ.
Many of today's preppers receive inspiration from the Internet, devouring information posted on websites like that run by attorney Michael T. Snider, who writes The Economic Collapse blog out of his home in northern Idaho.
"Modern preppers are much different from the survivalists of the old days," he said. "You could be living next door to a prepper and never even know it. Many suburbanites are turning spare rooms into food pantries and are going for survival training on the weekends."
Like other preppers, Snider is worried about the end of a functioning U.S. economy. He points out that tens of millions of Americans are on food stamps and that many U.S. children are living in poverty.
"Most people have a gut feeling that something has gone terribly wrong, but that doesn't mean that they understand what is happening," he said. "A lot of Americans sense that a massive economic storm is coming and they want to be prepared for it."
So, assuming there is no collapse of society -- which the preppers call "uncivilization" -- what is the future of the preppers?
Gutierrez said that unlike the Millerites -- or followers of radio preacher Harold Camping, who predicted the world would end last year -- preppers are not setting a date for the coming destruction. The Mayan Calendar predicts doom this December.
"The minute you set a date, you are courting disconfirmation," she said.
Tegeler, who recalls being hit by tornadoes and floods in her southwestern Virginia home, said that none of her "survival center" products will go to waste.
"I think it's silly not to be prepared," she said. "After all, anything can happen."

It's HMS bric-a-brac! Boat built using hundreds of wooden objects including rolling pins, hockey sticks and a guitar is unveiled | Mail Online

What do a sliver of a guitar played by Jimi Hendrix, a rolling pin, a discarded tennis racket, a Masai warrior's club and a crate used to carry all of Britain's wealth off to Canada during World War 2 have in common?
Answer: They have all been used to create an incredible living archive of memories woven into the very fabric of this beautiful boat.
More than 1,200 items have been donated by members of the public as part of the inspirational Boat Project.

Hull of a work of art: the boat, built from obscure wooden objects donated by members of the public and intended as a living archive of memories, has been unveiled near Portsmouth.
Don't fret: A donated guitar is incorporated into the hull of the boat by boat builder Sean Quaill as part of the imaginative project.
The fabulous construction is the south-east entry for the London 2012 Cultural Olympiad. It will visit south coast towns during the Olympics and is based at Thornham Marina, near Portsmouth, in that most seafaring area of England's coastline.

It has been handcrafted by a team of boatbuilders using the contributed parts, which range from the ordinary to the extraordinary.
Original parts from the Mary Rose and HMS Victory sit alongside more mundane objects, all of which have their own unique story.
The largest object is an 8ft long plank from a felled yew tree and the smallest is a cocktail stick.
Yacht an achievement: Boat builder Sean Quaill works on the project; if you look closely along the hull you can see a tennis racket, the side section of a stool, a wooden mallet and a shelf.
Other parts include acoustic guitars, a violin, a didgeridoo, a tennis racket, hockey sticks, a toy helicopter, a children’s train track and a Victorian police truncheon.
And hidden among them all is that famed little piece of the guitar which rock legend Jimi Hendrix played in 1960.

Each timber item was donated by a family or individual to represent a personal story, and the pieces have now been assembled to form the 30ft-long hull of the yacht.
The unique vessel is the brainchild of artists Gregg Whelan and Gary Winters, who appealed to members of the public to come forward with wooden objects that they would like to see included.

Close shave: A sliver of a Jimi Hendrix guitar from 1960 has been donated to be incorporated into the boat project, which is intended to form a living archive of memories and has been constructed from 1,200 wooden items.
What it's made of: Boatbuilder Sean Quaill at work on the boat. From left, 1: Part of an MFO box used by the 47th Regiment of the Royal Artillery; 2: Part of a dinghy built in 1946 in Bangor; 3: Cask from the last beer brewed at The Sussex Brewery; 4 & 6: Parts of crates used to transport all of Britain's wealth (£637 million in gold bullion and £1.25 billion in securities) to Canada during WW2; 5: Right of way sign donated by a countryside ranger; 7: A ruler from Hangleton Junior School in Brighton; 8: Crate from C.N. Burnett, who lived in China and used it to bring jade and porcelain home in 1949; 9: Part of a notice board for Chichester Harbour Conservancy; 10: Part of the cockpit coaming from the yacht Morning Cloud, which foundered off Brighton; 11: Nautilus was the name of a boat that has now been returned to its original name, Nimrod.
They were stunned to receive more than 5,000 visitors to their exhibitions and boat shed with 1,200 items being donated, each with a personal story behind it.

They have spent a year working with four boatbuilders, led by Olympic sailor Mark Covell and designer Simon Rogers, to assemble the two-tonne vessel and make it seaworthy.

Using a framework of stripped cedar planks, they created a waterproof hull and cut each of the donations carefully until they fitted together like a jigsaw.

Experts then glued them on the hull, sanded them down so that every item was of the same depth, and sealed them with laminating resin in a process called the West Epoxy System.

On track: A section of a child's toy track makes a bold arc in the hull of the boat. The Boat Project, based near Portsmouth, is one of 12 entries to the London 2012 Cultural Olympiad, and will be ready to set sail later this year.
Not all that it seems: A useless salad server in the shape of a spanner is embedded in the structure of the boat which was unveiled near Portsmouth.
The finished project, which will contain dozens of different types of wood, has been funded by a £500,000 grant from Arts Council England.

The more unusual items on board include a light cover from the recently decommissioned HMS Ark Royal, a piece of track from the newly-built Olympic velodrome and a walnut Rolls Royce dashboard.

One piece dates back more than 500 years: Patsy Clarke donated a small piece of the Mary Rose, which her husband bought when the ship was raised in 1982.

The current commander of HMS Victory donated a 7ft long plank of teak from the port forward section of the historic ship, taken while it was in dry dock in 1922.

Eric Hinkley donated his home-made scout woggle and compass, which he wore in the opening ceremony of the 1948 Olympic Games aged 14.

Going to plan: A sketch of the boat which has been created for the London 2012 Cultural Olympiad, and will be ready to set sail later this year.
Tools of the trade: The equipment the team used to create the boat from the myriad wooden objects donated for the ambitious project based near Portsmouth.
The boat even includes two security boxes which were used to carry thousands of pounds worth of gold to Canada for safe-keeping when Britain was under threat of invasion in 1940.

And a Masai warrior’s club that was used to kill lions in Africa was also handed in.
Jesse Loynes, one of the boat builders, said: 'It’s just been the most fantastic adventure.
'It’s essentially like building an enormous jigsaw without a picture on the box to go by.

'We have been so fortunate to get so many amazing donations from the public - I think it was 1,219 items at the last count.

'I remember putting them all outside to have a look at, to figure out how to arrange them all on the boat. They nearly covered the whole car park.

'From there, we tried to fit them together in a way that was balanced and aesthetically pleasing.

'It’s been a real challenge but we’re thrilled with the results.'

The finished boat will be named via a public competition which will be launched in May.

It will then sail along the south coast and stop at Brighton, Portsmouth, Hastings and Margate and will be crewed by a team of eight sailors, including six volunteers nominated by their friends and family, an apprentice and captain Mike Barham, from Gosport, Hants.

The finished boat will be accompanied by a book, detailing all of the donations and the stories behind them.