Saturday, December 15, 2007

Conversion Rates and The Types of Visitors

Wendy Moe and Peter Fader published a paper in March 2003 titled "Dynamic Conversion Behavior at e-Commerce Sites". In it, they talk (among other things) about the types of visitors to e-commerce sites. This struck me because in analytics we tend to lump all visitors together, just because we can't easily define the segments. Moe and Fader mention the obvious: conversion rates on e-commerce sites are spectacularly low compared with physical stores, below 5% in most cases. Any brick-and-mortar store would have closed up within a week at that rate. They analyze why rates are so low, mostly because so many visitors aren't really immediate buyers. Moe and Fader classify visitors into four groups, only one of which is the get-in-get-out immediate buyer type. Note that I've added commentary from my own perspective, so Moe and Fader may take issue with how I'm using their categorization.

  • Direct buyers. They come, they choose, they slap down the plastic. They enter knowing what they're looking for. The site can be marginally usable and unattractive, and they'll still probably buy.
  • Indirect buyers. They know generally what they want, but they're browsing. Probably will buy, but will take a while and lots of pages. May be influenced somewhat by site characteristics, but not extensively.
  • Threshold buyers. These aren't ready to buy, but they're curious and skittish. They're window-shopping. For these visitors, site elements are everything. If the site isn't sticky, they'll leave. Likely influenced by usability and attractiveness. Store impression is as important as product.
  • Never-buy visitors. These are seeking knowledge, not product. Not likely to be influenced by site appearance or usability. May look at lots of pages or very few. No intention of buying.
Now, these aren't mutually exclusive. I switch modes. I may go to Amazon to order a book I've been wanting, or I may go there just to see what's inside of a particular book that my campus will order.

The relative size of each group has a huge bearing on site owner strategy, but all four groups are typically crammed into the data and dashboard indiscriminately. In statistics, we'd call this conflating populations: mixing distinct groups into a single distribution, which often shows up as a distribution with multiple "hills". And it makes analysis almost a blind operation. For example, if you're seeing a large per-session page view rate, but the conversion rate refuses to rise from 2%, is it because you haven't satisfied the threshold buyers, or because you have too many never-buys, or because the population is mostly indirect buyers? One solution won't hit all of them, so choosing to optimize something on the site may not be the answer. For example, if you streamline the checkout, that may help to capture more of the threshold buyers, but if they're actually only a small percentage of the visitor count, you won't see much of a bump in conversion.
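To make that concrete, here's a minimal sketch with invented numbers (the segment shares and per-segment conversion rates below are mine, not Moe and Fader's). It shows how a blended rate of roughly 2% can hide wildly different segments, and why a checkout fix that only touches threshold buyers barely moves the aggregate number.

```python
# Hypothetical visitor mix and per-segment conversion rates (illustrative only).
segments = {
    "direct buyers":    {"share": 0.02, "conv": 0.70},
    "indirect buyers":  {"share": 0.10, "conv": 0.05},
    "threshold buyers": {"share": 0.18, "conv": 0.01},
    "never-buys":       {"share": 0.70, "conv": 0.00},
}

def blended_conversion(segments):
    """Overall conversion is the share-weighted average of segment rates."""
    return sum(s["share"] * s["conv"] for s in segments.values())

print(f"Blended conversion: {blended_conversion(segments):.2%}")   # about 2.1%

# Suppose a streamlined checkout doubles the threshold-buyer rate (0.01 -> 0.02).
segments["threshold buyers"]["conv"] = 0.02
print(f"After checkout fix:  {blended_conversion(segments):.2%}")  # about 2.3%
```

Doubling the threshold-buyer rate sounds dramatic, but with these made-up proportions the dashboard barely twitches.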

For very large sites, picking the right optimization strategy may make the difference between a huge loss and massive improvement, six figures or more. So how do we segment these populations? Surveys always beckon to us, but I'm skeptical. Surveys online are always self-selected, and self-selection seems to me to invalidate most survey data. There may be clues in the analytics data itself, but I have yet to find a formula. Moe and Fader propose a formula (in fact that's the major purpose of the paper). It would provide a good basis in the real world if only our figures for new and returning visitors were accurate, and they're decidedly not. Several studies have confirmed that cookie-based figures for new visitors can be off by a factor of 2 or more. Unfortunately, if we don't know who's coming to the site, we can't segment them, and without registration we just can't be sure.
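As a back-of-the-envelope illustration of why cookie-based "new visitor" counts are shaky (every number below is made up), consider what happens when a slice of returning visitors clear or lose their cookies and get recounted as new:

```python
# Illustrative only: how cookie churn inflates "new visitor" counts.
true_new = 10_000          # genuinely first-time visitors this month
true_returning = 40_000    # genuinely returning visitors this month
cookie_churn = 0.30        # share of returning visitors whose cookie was deleted or blocked

# Returning visitors with no surviving cookie get counted as new.
reported_new = true_new + true_returning * cookie_churn
print(f"True new:     {true_new:,}")
print(f"Reported new: {reported_new:,.0f}  (inflated by {reported_new / true_new:.1f}x)")
```

With those assumed figures the "new visitor" count comes out more than double the truth, which is exactly the factor-of-2 error the studies describe.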

Still, keeping these four categories in mind will help when doing site analysis and optimization. If we can make an informed guess about which category of visitor is dominating, we can advise the client accordingly. For example, if we get a lot of visitors to particular pages that have a lot of information, and the pages are obviously being read, and the product is unusual or truly new, then we may have a lot of never-buys. This intuitive approach isn't completely satisfying to me, but it may be all we have.

Tuesday, November 27, 2007

Scott Adams and the Demise of Common Sense

Scott Adams, the creator of Dilbert, has announced on his blog that he'll be blogging less often. It seems that his original, common-sense expectations about how the blog would turn out haven't panned out at all.

Those original expectations included:

1. Advertising dollars
2. Compiling the best posts into a book
3. Growing the audience for Dilbert
4. Artistic satisfaction

Of these, only number 4 has worked out. RSS lets visitors bypass the ads, the book hasn't done all that well, and the audience for Dilbert hasn't grown along with the blog at all. As the blog has exploded, the benefits to him haven't. So he's talking about blogging less often. It's a great illustration of how common sense is a lousy predictor of future events. Viva testing and statistics.

Wednesday, November 21, 2007

Numbers Aren't Always Useful

A while back I worked with a client who used a popular service that provides percentage figures for visitation of others' websites. This service contracts with ISPs to get sanitized web visitation figures from its subscribers, some ten million of them last I heard. Then it reports mass figures on who went where. The problem is that it's impossible to find out just what those numbers are - all you get from the service is percentages, which are presumably percentages of the subset of ten million that went to that particular site on any given day. How many is that total? Nobody knows, and the service isn't telling. In my client's case, they were getting figures like .0016%, which is so low that it's hardly worth knowing, but they were very keen on it, watching the numbers shift daily from .0016% to .0021%, and cheering lustily at the uptick. Of course, .000016 X 10^7 is only 160 (visits, users? who knows?), and the surge amounted to only 50, a number that didn't make them so cheerful when I pointed it out. And that assumes that the denominator is indeed ten million, which it probably isn't - it's doubtful that all ten million subscribers are online on any given day. The actual improvement might have been as little as 10, or less.
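To spell out the arithmetic, here's the same uptick converted to counts under several guesses at the denominator - and the denominator is a guess every time, which is exactly the problem:

```python
# The percentages the client celebrated, converted to counts under different
# guesses at the denominator -- which the service never disclosed.
before, after = 0.0016 / 100, 0.0021 / 100   # 0.0016% and 0.0021%

for panel_size in (10_000_000, 5_000_000, 2_000_000):   # subscribers actually active that day?
    gain = (after - before) * panel_size
    print(f"denominator {panel_size:>10,}: "
          f"{before * panel_size:6.0f} -> {after * panel_size:6.0f}  (uptick of {gain:.0f})")
```

At ten million the celebrated surge is 50 whatever-they-ares; at two million it's 10.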

The problem is taking percentages as real numbers. They need to be scrutinized, and you need to know what the denominator is. We called the service to find out just what those percentage figures meant, but they either wouldn't, or couldn't, tell us. The people we talked to were frankly ignorant of simple statistics, calling the ten million subscribers a "sample". I had to tell them that it wasn't a sample, it was a sampling frame, and that without knowing the denominator for the percentage, the tiny percentages the client tracked were all but meaningless. The service had no answer, and the call ended unsatisfactorily. And the client continues using the service to this day, and happily reporting the results to higher-ups.

The service may be worthwhile for larger sites that get a sizable percentage of whatever data points the service is tracking, but any sites smaller than that are probably not getting their money's worth. I used to think that people with marketing degrees would be more conscious of the fallibility of numbers and the need to understand analytics, but I've been wrong more often than right.

Sunday, November 18, 2007

Common Sense Sucks

As HCI practitioners who believe in making technology slicker to use, our biggest opponents may be budgets, but coming up strong on the outside is stupidity based on common sense. Human psychology is a weird and wonderful thing, and I love studying it because there's always something unexpected waiting to mug you around the corner. Where others love bar fights, I love getting pasted by new knowledge. And research has shown for a long time that common sense is a lousy predictor of future events. You'd think that humans would be good at prediction by now, but we're not. For one thing, we think in linear terms - one cause, one effect. But the universe is gloriously nonlinear, and every event has many causes. Further, humans tend to predict based on what's going on now. When we ask "how would you like this?" or "how would you feel about this?", the respondent has to extrapolate based solely on how he feels at the moment. When we put this to the test, we find that the respondents don't really feel that way later on. This is why I put no faith in predictive surveys.

When those in power use common sense to make sweeping laws or spend huge sums of money, they too usually screw the pooch. The Freakonomics blog has a short piece on how abstinence-only sex education has actually resulted in more teen pregnancies, something any psychologist could have predicted. The urge to merge is far too strong to educate away, especially in adolescents who haven't yet developed much control over their impulses. The victims of abstinence-only education aren't given the tools to prevent disease or pregnancy, but they're driven to give in to the mating call anyway, resulting in more pregnancies. It's obvious from studies that abstinence-only ed doesn't work, and New York has appropriately dropped it, despite losing millions in federal funds. But the US Congress stubbornly sticks to the plan. Common sense is dooming teenagers and costing millions, all with no substantial foundation, but that doesn't matter.

The lesson for us is to mistrust common sense, both our own and our clients'. I've seen clients cling to old, unusable designs simply out of faith. Analytics, psychological principles, and user testing will eliminate most of the problems if they're used, but they won't be applied if common sense has anything to say about it.

Sunday, November 11, 2007

Why a Messy Room is a Good Thing

I saw a comic the other morning that featured a young man and a thoroughly trashed room, with items strewn all over the place. Doubtless in a whiny voice, the youngster pleaded "but when I put things away, I can't find anything". Parents will smile smugly in silent rebuttal, but an HCI'er and part-time economics junkie like me can't help but wonder if those legions of room-messers don't have a point. When we humans do things frequently and in large numbers, there's usually something to it.

Advocates for clean rooms (like rabid inspecting drill sergeants) like to say that everything has its place, and that's where it should be. But is that really true? After you reach a certain threshold of object ownership, is there a single place for everything? Children mostly keep toys in a box, where toys on top obscure the ones underneath. Folded clothing suffers the same fate in a drawer, where you have to pull out the stuff on top to get to the items below.

In effect, it seems to me that a messy room is actually a form of shallow navigation where little is hidden badly enough to be overlooked. The same pertains to a cluttered desk. I often keep a wide, low pile of file folders, papers, notes, books, and pads on half my desk. A neat freak might object that I could just as easily keep all that in their respective drawers and bookcases, but I'm convinced (without much evidence, I have to add) that doing so is less efficient. I can riffle through the pile faster than I can flip through file folders in a file drawer, even in alphabetical order. The pile is for things I'm using frequently at the moment, and overcomes the problem of filing materials under the wrong headings. Our users tend to like shallow navigation, and I'm convinced that I do, too.

Saturday, November 3, 2007

Give Me Your Huddled Masses...

When I do usability work, it's astonishing how often a client won't give me access to their user base. They're so used to protecting that base that they can't immediately swing their thinking around to working with that base, rather than just wining and dining them. Sales personnel are the most protective, in my experience. They seem to live in terror that somebody will screw up the precarious relationship they have with their customers. Even when I get to customers, I'm usually given only a small number of very carefully selected and primed representatives, and sometimes not the right people within the customer organizations.

The excuses are legion - customers are too busy, they're too far away, the right people aren't available. But perhaps the most ironic is the excuse that I'll raise customer expectations by letting them think that their wishes will become features in the next release, and if they aren't then the customers will get crabby and disappointed. A lot of salespeople leave me with the impression that their customers are generally vocal and upset about something, interspersed with short interludes of grumpy acceptance. The salesperson doesn't want that sleeping dog disturbed by the slightest zephyr.

But what they can't seem to get straight is that I'm not after user wishes; I'm after user processes. I'm not going to talk much about desires, but about business. I'm trying to understand how customers operate. They'll tell me their desires, of course, but I'm always careful to be noncommittal, with comments like "I'll see what I can do about that in the design, but it may have to be in version two".

Letting users into the planning and design stages is good business, and establishes a level of trust that can't be won any other way. Now if I can just make clients see that...

Friday, October 26, 2007

Another Hideous Example

Perhaps the most fun entries to write are about awful sites, and here's a real winner, courtesy of BoingBoing. The site is by the British government, and it's trying to discourage knife violence. But the site has two major failings: it's all in Flash, and it has enough stupid lawyerly language to discourage a Supreme Court justice from reading the site. The BoingBoing article is here. The awful site is here.

Monday, October 15, 2007

Microsoft Is After Your Brain

New Scientist is reporting a Microsoft patent application for monitoring a test subject with an EEG during usability testing, so that testers can see how people really react, instead of relying on second-hand information like behaviors or think-aloud protocols. Apparently Microsoft believes it has a way to separate out the dozens of noisy signals from the ones they believe relate to user responses. Are all the rest of us out of work now?

Where are the UCD'ers for the Military?

An article in Slate points out that Iraqi jihadists are every bit as clever about creating IEDs as the Viet Cong ever were, but with vastly updated technological help. To counter the problem, the military has come up with seemingly workable ideas that fail in the field: a drone lead vehicle in a convoy that can be driven from the back but makes the virtual drivers carsick, door armor so heavy it can't be moved, and images so detailed that the human eye despairs of picking out field from background. The article says quite plainly:

The enemy's simple technology suits human limits; our complex technology defies them. Our crazy menu of jammers confused our troops, making them think they were jamming the right frequencies when they weren't. Our tutorials in wave propagation flummoxed them. When the $800,000 IED neutralizer flunked real-world tests, the company that built it blamed operator error, denying that the machine was "a failure in any way." But if humans can't operate your machine, your machine is a failure.

If UCD'ers are getting so commonly accepted, why isn't the military letting us design and test their wonderful ideas? Most of our best people could tell them outright that these ideas won't work, because they've been tried elsewhere.

Wednesday, October 10, 2007

Vocal Joystick

Researchers at the University of Washington have built a "joystick" that works entirely with basic vocal sounds. Vowel sounds move the cursor in various directions, while consonants like "k" and "ch" simulate clicks and releasing the mouse button. Saying the sounds louder moves the cursor faster.

Existing joysticks for the disabled require a stick in the mouth, which is tiring and interferes with speech, or head or eye tracking, which is hard to do properly. The inventors say that they elected not to use full voice recognition because it was far less efficient. That makes sense to me - syllables are much shorter and more universal than whole words or phrases, and they can't be misinterpreted as easily.
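Here's a rough sketch of the kind of sound-to-action mapping described above. The specific sound set and the velocity scaling are my guesses for illustration, not the actual Vocal Joystick design.

```python
# Hypothetical sketch of a vocal-sound control mapping: vowels steer, loudness
# scales speed, and short consonants act as button events.
VOWEL_DIRECTIONS = {          # unit vectors (dx, dy) -- assumed, not the real mapping
    "a": (0, -1),   # up
    "u": (0, 1),    # down
    "e": (-1, 0),   # left
    "o": (1, 0),    # right
}
CONSONANT_EVENTS = {"k": "click", "ch": "release"}

def interpret(sound: str, loudness: float):
    """Turn a recognized sound plus its loudness (0..1) into a cursor command."""
    if sound in VOWEL_DIRECTIONS:
        dx, dy = VOWEL_DIRECTIONS[sound]
        speed = 5 + 45 * loudness            # louder -> faster, in pixels per tick
        return ("move", dx * speed, dy * speed)
    if sound in CONSONANT_EVENTS:
        return (CONSONANT_EVENTS[sound],)
    return ("ignore",)

print(interpret("o", 0.8))   # ('move', 41.0, 0.0)
print(interpret("k", 0.2))   # ('click',)
```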

Tuesday, October 9, 2007

File Sharing and Offshoring

I see double standards everywhere, I guess. It's not so much a moral failing as a human condition. Take file sharing and offshoring. Proponents of free file sharing like Slashdot and BoingBoing speak for a huge number of users and techies who dismiss the entertainment industry's hissy fits over the practice with replies like "Get used to it", "Globalization has made your old business model obsolete", and "File sharing opens up the market with more diversity".

My personal take on file sharing is that it is indeed a new game that threatens the business model of the choke point that entertainment companies have profited from for more than a century. It's never safe to dismiss the creative energies of millions of users who want to circumvent the old restrictions on their pleasure. Getting around "The Man" is also more a human condition than a moral failing. The simple fact for the entertainment moguls is that file sharing exists and can't be effectively stopped, so they will indeed have to learn how to live with it. Further, file sharing has begun to live up to its potential as a redistributor of talent, a true exercise in globally democratic artistry.

But then there's the B side of the technological album - offshoring. The same globalization and foreign talent development that opened up music and movies has also enabled engineers and developers in Eastern Europe, Asia, and other places to cater to the markets of America. I've heard the anecdotes about how foreign code is often bug-ridden and flaky, but so is much of American code. Further, the quality of the code continues to rise as foreign programmers become university-trained. Only a fraction of the toys from China are lead-coated, and only a fraction of the code from India is trash. We can't confuse media hype with reality.

I have to admit to being conflicted about offshoring. As jobs drift away from Americans, even in small percentages, the net effect is to make talented high schoolers pause at the door to computer science and engineering. Both the now-historical dot-bomb and current tales of offshoring have combined to devastate computer-related programs in higher education all over the country. We're losing a generation. Offshoring itself doesn't scare me, but its reputation does.

On the other hand, the egalitarian meritocrat within me can't help but marvel at what the other nations on Earth can accomplish with some money and markets. A recent review of a Korean Kia model, for example, compared it favorably with a Lexus, and for half the price. India's Bollywood is now producing movies that can compete in quality with many indies in America. "Dil Se", a Bollywood film that broke into the UK top ten, features an energetic crowd dance on the top of a moving train. Why not put more money and projects into these people's hands? If technology is an unstoppable force for globalization, so is commerce. If we can't stop offshoring because it works, we have to change our business model, don't we? We need to stop bemoaning it and get on with doing whatever we find we can do best.

Monday, October 8, 2007

Another Example of Lousy Design

This curtain control is said to be an example of truly bad design. As the cutline says:

There is no natural mapping between the buttons and their functions. I went through quite a bit of trial & error before figuring it out. And the problem is that even once you figure it out, it's not very logical.
But one commentator responded that
From the printing around the buttons, it looks pretty clear... The right column is for the solid curtains and the left column for the sheer curtains. The top button in either column opens the curtains, the middle button stops them while opening or closing, and the bottom button closes them.
This seems reasonable only after you've studied the panel for a while. In low light, in poor position, or if an old person is looking at it, the design is still bad, in my opinion. Most Americans, at least, tend to think of controls being arranged in vertical clusters, not columns. Maybe they could use some icons to improve effectiveness, or have two controls. Even better is to just use the old-style rods that you pull to open or close drapes. One big plus to the rod approach is that the visually impaired can figure it out, while the electric control doesn't even have a braille equivalent for its labels. Why overcomplicate things?

Sunday, September 23, 2007

Did My Grandparents' Brains Ever Explode?

I live in a historic neighborhood, about three blocks from where my paternal grandparents set up housekeeping in the very early years of the twentieth century, certainly before the 1920s and quite likely before WWI, which my grandfather did not participate in. I sometimes try to imagine what life was like during that period. We like to think of today as the epitome of rapid progress, but I think it's nothing next to what they experienced. Today, semiconductor technology has made only tiny incremental advances since the breakthroughs of the transistor and the integrated circuit back in the 50s, 60s, and 70s. But consider what my grandparents lived through -

The telephone. During the period before my father's birth (1928), the telephone went from being an oddity in hotels and banks to an everyday object in homes.

The car. When they moved to our town, my grandparents could have seen horses still pulling wagonloads of coal and ice around town. By the time my father was born, horses had vanished from the streets, and internal combustion engines ruled.

Central heating. Most of the homes around here still have coal scuttles or blocked-off holes where the scuttles were. By the 1940s, coal was a dead business, supplanted by gas heat in a central furnace.

Radio. By 1930, most of the homes hereabouts were within hearing distance of a radio, and by the late 1930s every home had at least one.

Television. They appeared in abundance in the 1950s. My grandparents were only just thinking about their retirements.

Air conditioning. They never had any. Neither did most of the homes until recently, perhaps the 1970s. Many still don't. But they saw it arrive in homes.

Aircraft. They lived from the beginning of aviation until the Jet Age. Although neither ever traveled on a jet, they could watch the planes pass overhead.

Their world both shrank and expanded at an astonishing pace. When they moved here, most transportation was by horse (expensive), on foot (limited), or by trolley (good for long distances). Most people lived near their jobs and walked to work. Communication was by word of mouth, or, rarely and expensively, by telegram. Perhaps by newspaper for larger news. By the time they died, they could have phoned any place in the world, heard war news from Europe, watched live broadcasts from Africa, flown to Greenland. My grandfather's jobs changed accordingly. The plant where he worked started out making farm implements and ended up making trucks. The coal and ice supply where he worked part-time finally shut down when demand shriveled away to nothing.

Yet they never seemed to be overawed by what happened around them. Today we're terribly self-conscious about our new things, but they seemed to take them in stride. When they answered the phone, they never put the receiver down reverently. The refrigerator, gas stove, and furnace were just ordinary devices by the time I knew them in the 1960s and 1970s. I suspect my grandmother, had I asked, would have expressed joy that she no longer had to kindle a coal fire first thing in the morning, but she never brought up the subject. They saw revolutions over and over again, yet never seemed stunned by any of it. Perhaps it was a lack of self-awareness, but it may also have been simple acceptance of it all, much as they seemed to accept all the deaths in the family (two out of four children dead in early adulthood) and the antics of the grandchildren. Come what may, they did their best without any public notices about it. Maybe in their world, revolutions didn't need to be understood, just tolerated, like everything else.

Wednesday, September 19, 2007

Bookmarks as IA

I once did a project for a client that involved talking with users about their browser bookmarks. The project was a redesign of an intranet that was built like a Wild West town, with a wacky combination of independent little plots stitched together only by virtue of being under the same corporate umbrella and having links on the central page of the intranet. Every department had its own navigation and design. It turned out that employees coped by using bookmarks to provide dependable paths back to the information they had so painstakingly located. Nothing new in that, of course; Web users still use that strategy. But interestingly, the names they gave to the bookmarked pages were indicative of their own quirky needs. In effect, the bookmarks were individualized navigation schemes, or IAs. By studying the bookmarks, we got a pretty fair idea of users' mental models for information.

Whenever users create informational structures, it's worth studying them to discover what's core, and what's transitory. That's the problem with today's tag or link clouds: they can't distinguish between fad and eternity. Any given cloud today may have "Britney" as its biggest member, but that probably won't be the case next year. Clouds are intrinsically time-bounded. But it would be interesting to do some multivariate work like cluster analysis on several clouds over time to see what drops out and what stays.
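Full cluster analysis would be overkill for a sketch, but even a simple persistence measure across cloud snapshots shows the idea. The snapshots and the 75% threshold below are invented for illustration.

```python
# Toy sketch: separate persistent tags from fads across monthly cloud snapshots.
snapshots = [
    {"britney": 900, "usability": 120, "ajax": 300, "ia": 80},
    {"usability": 130, "ajax": 250, "ia": 90, "iphone": 700},
    {"usability": 140, "ia": 85, "iphone": 400, "facebook": 600},
    {"usability": 150, "ia": 95, "facebook": 800},
]

def persistence(tag, snaps):
    """Fraction of snapshots in which a tag appears at all."""
    return sum(tag in s for s in snaps) / len(snaps)

all_tags = set().union(*snapshots)
core = sorted(t for t in all_tags if persistence(t, snapshots) >= 0.75)
fads = sorted(all_tags - set(core))
print("core:", core)   # tags that stay put over time
print("fads:", fads)   # tags that spike and drop out
```

Run over real clouds sampled month by month, the "core" list is a first cut at what's actually structural in the vocabulary, and a more serious cluster analysis could then group the survivors.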

Wednesday, September 12, 2007

Ice Cream by the Blues

Here's an ice cream machine that dispenses an amount based on how unhappy the customer is perceived to be. The vending machine does a voice analysis to determine your level of the blues. Some days I'd qualify for a whole week's worth of ice cream production.

Friday, September 7, 2007

What Have I Forgotten?

I'm teaching a graduate course in HCI, and I've learned one thing so far - how awfully much I've forgotten already. We're getting into patterns and pattern languages, and I'm alarmed to say that although I have a vague recollection of the subject, I haven't been able to use patterns much in the field, so they've receded into my mental archives. The same is true of other things. Names for rarely-used prototype techniques. Details about the types of conceptual models. The difference between categorization and classification. I know this kind of forgetting happens, but it's not supposed to happen to me.

Still, teaching these courses keeps the information fresher, and that's comforting to me. It's one reason why I keep teaching, even though sometimes the cognitive loading of work, home, and two or three simultaneous classes can get stressful.

Why Am I Not a Programmer?

I was recently in a meeting where I was enumerating what I did well and what I didn't. Although I have a lot of skills, there are some things I just don't do well. In user experience terms, those are primarily programming and graphical design. I can do both, but haltingly, and others do them much better, so I have a tendency to avoid them. At least with graphical design I tend to compare my meager skills to those I see on display on the finest pages, so perhaps my bar there is unreasonably high. I've never had much coursework in graphical design, at least in the past two decades, so I have little basis for comparison.

I have taken courses recently in programming, though, so I think I'm more realistic there. Java drove me crazy. I'd be looking for a method in the documentation so I could find its arguments, and to my annoyance I'd have to trace it up the tree several classes, because it was inherited a dozen times. The classes weren't hard, just tedious, frustrating, and boring. I've had the same problem in other programming classes. I was probably the only English major in history to sign up for an assembly language programming class. It went OK, but I didn't feel any affinity for the subject. No spark. No gift. I concluded I would never be much good at it, and that was that.

But after my meeting, I started to reconsider my position, especially after I talked with a programmer and looked up some "how to think like a programmer" pages. To my amazement, it appears programmers don't like to code much more than I do. It's just that to solve their problems, which they love to do, they have to code.

It reminds me of my mathophobic days. Although I teach statistics now, I was once afflicted with math anxiety. That cracked away after I abruptly realized what math is. It's a modeling language, with enormous lossless compression. You can model reality with it, sort of. Once you learn the language, the rest is just fiddling with it until any given equation makes sense. Even mathematicians diddle and doodle. There are no born mathematicians, only those who have messed around with it for a long time until it's easy for them to manipulate the linguistic symbols. There may be a math aptitude, but no math gene. I could do it too.

Maybe programming is like that and I'm being too critical with myself. I have yet to meet anyone who enjoyed programming courses and having to memorize languages, any more than I've met mathematicians who liked algebra classes. The essence of math is modeling; the essence of programming is problem-solving. Indeed, a lot of programmer-pundits say that formal college training in programming is counterproductive, because it confuses syntax with thinking. Many advocate starting programming careers by learning calculus, linear algebra, or even physics, just to get into the swing of thinking through complicated problems. Maybe I don't want to earn my living inside a compiler, but maybe too I'm too hard on myself when I sneer at my programming skills. Maybe I'm not much worse than anybody else.

Tuesday, September 4, 2007

Boeing Turns to Psychology to Design 787

The September issue of Air and Space Magazine has an article about how Boeing designed the interior of the new 787 using psychologists, focus groups, and other user-centered design techniques. They hired renowned marketing expert Clotaire Rapaille to help. Apparently together they did a load of research about how fliers like to see aircraft interiors. And they're not publishing what they learned, either. Usability as trade secret.

Friday, August 31, 2007

New Data Visualizations

One of my enduring interests is data visualization. It hits so many user experience hot buttons: cognition, potential for confusion, Gestalt principles, and so forth. Research in the past few years seems to have slowed considerably in this area, perhaps because much of the breakthrough work has already been done. We're not seeing new methods of visualization now, but refinements of old ones. Fisheye views (PDF) have been around for a very long time now. So have heat maps, tree maps, network maps, and so on. They're just getting new treatments and makeovers. If you want to see how a lot of them have been retooled with modern computing power and pretty colors, check out this article in the online zine Smashing.

Most of the applications are intriguing and professionally done, but I'm not seeing anything that makes me sit forward in my chair. Many old standbys have been dusted off, like the radar chart, but everything here has been done elsewhere. I don't suppose there are many more visualization methods to be discovered. But the flip side of this is that these techniques are getting more common and less expensive, and therefore more accessible to us. I've wanted to do tree maps forever, but no client has ever warmed to the idea. If you want to see how the principle can be applied well, if a bit understated, look at the daily stock market data here.
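For the curious, a treemap doesn't take much machinery. Here's a minimal one-level "slice" layout sketch (not the squarified algorithm the polished tools use): each item simply gets a strip of the canvas proportional to its value.

```python
# Minimal one-level "slice" treemap layout: each item gets a strip of the canvas
# proportional to its value; flip `vertical` to cut the other way.
def slice_layout(items, x, y, w, h, vertical=True):
    """items: list of (label, value). Returns (label, x, y, w, h) rectangles."""
    total = sum(v for _, v in items)
    rects, offset = [], 0.0
    for label, value in items:
        frac = value / total
        if vertical:   # cut the rectangle into vertical strips
            rects.append((label, x + offset * w, y, frac * w, h))
        else:          # cut into horizontal strips
            rects.append((label, x, y + offset * h, w, frac * h))
        offset += frac
    return rects

# Invented sector weights, just to show the proportional carving-up.
market = [("Tech", 40), ("Energy", 25), ("Health", 20), ("Retail", 15)]
for rect in slice_layout(market, 0, 0, 100, 60):
    print(rect)
```

Recursing into each strip with the cut direction flipped gives the nested rectangles you see in the stock market maps, with color typically carrying the day's gain or loss.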

Wednesday, August 22, 2007

Control is Everything

Scott Adams got me thinking. In his blog entry for August 17, he mentions that one of our strongest needs is to feel like we're in control. He used an old example: A genie offers you two choices. In the first choice, "You can eat at the finest restaurants in the world for free, twice a week. The only catch is that the genie picks the day, when you are not already booked, and he picks the specific restaurant." In the second choice, "You can eat at “good” restaurants, again for free, twice a week. But this time you can schedule it whenever you want, up to two places per week, and pick whatever “good” restaurant you want."

He goes on to develop the theme that the first choice probably wouldn't make many people happy, because they would eventually feel the keen sense of loss of control. The second choice, while gastronomically less appealing, is probably a better one for most of us.

It reminded me that one thing users dearly love is control, or at least the illusion of it. This is something that subconsciously irks me about lots of software and websites, I think. It's why I'm irritated with Flash so often. It just takes off and does things without asking me. The same thing annoys me about flashing ads, shifting menus, and other things that don't help me do things, but invade my locus of control. We humans don't seem to resent losing control if we don't expect it. We accept that the good guy may die at the end of the movie, but we'll shriek in fury if we can't change the channel to another movie. And we accept a loss of control when it benefits us. My car's engine does hundreds of things that I don't need to approve as they're happening. But there are some places where humans just won't accept interference. I wouldn't take a car at a discount if it decided by itself when it would start. The same thing is true for software and websites, I think.

Decluttering

A group of scientists at MIT headed by Ruth Rosenholtz, a long-time researcher into vision and technology, has developed a prototype application in MATLAB that determines the amount of clutter on-screen (Link). The HCI profession has long needed something that could separate figure from ground reliably. The program is only a prototype, but apparently it's rather promising.

The problem of separating the vital few from the trivial many isn't itself trivial. Nuclear facility control rooms are a case in point. Rows of lights can go from being background hum to suddenly becoming extremely important. How much do you expose to an operator, or to a website user? Hick's Law was an early attempt at measuring how much stuff was too much, but the sophistication of control schemes today needs a better way of knowing when you've overstuffed the interface.
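For reference, the Hick-Hyman formulation is usually written T = a + b * log2(n + 1): decision time grows with the log of the number of equally likely choices. A quick sketch with made-up coefficients shows both the shape of the curve and why it's too blunt for a modern control room - it says nothing about which of the n items actually matters.

```python
import math

def hick_decision_time(n_choices, a=0.2, b=0.15):
    """Hick-Hyman law: T = a + b * log2(n + 1). The coefficients here are
    illustrative only; real values must be fit to the task and the user."""
    return a + b * math.log2(n_choices + 1)

for n in (2, 8, 32, 128):
    print(f"{n:4d} choices -> ~{hick_decision_time(n):.2f} s to decide")
```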

Friday, August 17, 2007

Are We The Way of the Future?

Computerworld recently published an article listing twelve job skills that no employer can refuse. The usual suspects hopped onto the list - whatever's hot, that is. Wireless, for example. But one of the twelve was usability. That surprised me, for several reasons.

First, usability isn't a universally-needed skill, at least not at the level a specialist brings. It kicks in only when the stakes are high and failure is all too expensive. For websites, it's primarily for ecommerce and other high-end sites. And those are designed and built on the coasts, not in the flyover zone where I live. Check out monster.com, dice.com, UPA's career page, or careerbuilder.com, and you'll see what I mean. Jobs are plentiful in Massachusetts, Washington State, California, New York, New Jersey, and Virginia. There aren't many in Iowa, Montana, Arizona, Indiana, Alabama, and most of the other flyovers. So how do we qualify as owners of a "can't miss" skillset?

Saturday, July 21, 2007

HCI of Casinos and Slot Machines

This is the kind of article I love to read. It's about the human factors that go into designing slot machines and casinos. The article says that a slot machine is designed to be loud and visually appealing, especially when it pays off. The three wheels encourage the victim, er, player to think that he's almost won when two of the wheels align, when there's no such thing as "almost". The slots are positioned just within an easy walk of the tables, because table players don't like to hear them, yet the spouses of table players may well play the slots while they're waiting. It also talks about casino design in general: the layout is arranged so that players can't see the outdoors, or even real outdoor lighting. No clocks, either, no way of knowing how long you've been there.

The slots are insidious in that they pay off only sporadically, which is how positive reinforcement works best. They pay off publicly, so everyone around is encouraged to keep playing. And they keep nurturing that "almost there" feeling.

Wednesday, July 18, 2007

Usability as a Path to Failure? Surely Not.

Todd Wilkins at Adaptive Path has thrown down a gauntlet to usability professionals, claiming that usability is not only overrated, but even injurious and a path to failure. He cites successful artists who didn't worry about "usability" either. He says:

So, why oh why do people in this day age still hold up “usability” as something laudable in product and service design? Praising usability is like giving me a gold star for remembering that I have to put each leg in a *different* place in my pants to put them on. (Admittedly, I *do* give my 2 year old daughter a gold star for this but then she’s 2.) Usability is not a strategy for design success. The efficiency you create in your interface will be copied almost instantaneously by your competitors. Recently, I’m even coming to believe that focusing on usability is actually a path to failure. Usability is too low level, too focused on minutia. It can’t compel people to be interested in interacting with your product or service. It can’t make you compelling or really differentiate you from other organizations. Or put another way, there’s only so far you can get by streamlining the shopping cart on your website.
Ahem.

Rarely do I see a designer get this blatant. They may think this drivel, but they don't usually voice it before a plunge into happy hour. First, usability here might seem synonymous with "make stuff easy to see". We professionals know this is not anywhere close to being true. Second, it entirely overlooks that websites aren't works of art, unless they're private, non-commercial ones. Commercial (e-commerce) sites are for making money, and every visitor who snorts in frustration and leaves is a financial failure, not a failure to make a friend. Visitors don't need to be engaged, or have fun in most cases. They need to transact. They need to do the tasks they arrived to do. Much "design" merely gets in the way of that simple goal, and ought to be cut out like a splinter under the fingernail, because it provides about as much value. A big-time website isn't an opportunity to dance the visitor about. It's to enable him to act.

Of course any successful design will be copied. But then, there are only a few designs in human experience, and they're all copied every day. Graphic designers tend to think that their designs are unique and powerful. Most often the ones that are sold this way are actually glitz with no go, at their core simply reproductions of past designs with a few cosmetic changes. There are only so many ways to arrange elements on a surface.

In my view, websites are not akin to artworks, but more like cars. First you make sure the damned thing drives properly, and then you dress it up. Not the other way around. We tolerate few physical objects in our lives that are as poorly designed as "cool" or "artistic" websites, yet we complain about the physical and work our way around the virtual. This seems asinine to me.

Thursday, July 12, 2007

In The Same Room, But Apart

It's possible that the next frontier for social technology is to connect people locally, rather than across continents. When I work at home and my wife is in an adjacent bedroom/office, it's easier for us to use IM and email that bounce off servers from Memphis to Mongolia than to use our own little network. Laptop users in public places can't readily recognize each other in cyberspace, not even through cell phones. Bluetooth has some limited capability, but it has a short range.

The problem is everywhere. If I see somebody with a cell phone and want to talk to them, I can't look up the number or ping them, even from just yards away. I have lots of occasion to be with people in short bursts of time, such as conferences, classes, professional group meetings, and the like. These are people I see rarely, but may desperately need to contact with questions or to have them render quick decisions. I may not even know them by sight, or by full name. For example, I'm a member of the local UPA, and we put on a yearly conference for World Usability Day. It attracts speakers I don't know, members I haven't seen in a long time, sponsors, attendees, and many others I might not be expected to pick out in a crowd. Another organizer comes up to me and asks "Does Dr. Willoughby (a speaker I haven't met) still need this wireless mouse?" Beats me, and I can't just ping him to find out.

What would help is to have a short-distance option in my cell phone that would operate much as my laptop does when it enters a wireless field. My laptop seeks out whatever signal it knows, and if it doesn't find one it knows, it tells me that. Otherwise, it just logs on. I'd like to see something similar in cell phones. I could haul out my phone and open a screen that lists the profiles of everyone within, say, 100 yards. You could hide your profile if you wanted, so the phone would show only "Hidden Account". But prominent people who speak at conferences usually want to be noticed, so I would scroll down until I found "Dr. Lance Willoughby", highlight him, and press the "Go" button. I'm connected to his phone.
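Here's a tiny sketch of the lookup I'm imagining. Everything in it - the range, the profile fields, the hiding rule - is invented for illustration; it assumes some discovery layer has already told the phone who is nearby and how far away they are.

```python
# Hypothetical sketch of the "who's within 100 yards" screen described above.
from dataclasses import dataclass

@dataclass
class Profile:
    name: str
    distance_yards: float
    hidden: bool = False

def nearby(profiles, max_yards=100):
    """List people in range, nearest first; hidden profiles show a placeholder."""
    in_range = [p for p in profiles if p.distance_yards <= max_yards]
    return [("Hidden Account" if p.hidden else p.name)
            for p in sorted(in_range, key=lambda p: p.distance_yards)]

crowd = [Profile("Dr. Lance Willoughby", 40),
         Profile("Registration desk", 15),
         Profile("Shy attendee", 25, hidden=True)]
print(nearby(crowd))   # ['Registration desk', 'Hidden Account', 'Dr. Lance Willoughby']
```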

Thursday, July 5, 2007

Keeping a Usability Portfolio

When I scan the want ads for people in the design end of usability, I often see language like "Must show portfolio". Huh? Most of us would find that requirement very hard to fulfill, no matter how long we've been in the business. It's not as though we're artists in a garret whose work endures down the centuries. It may last only days or weeks. And it may be buried in the overall design of the site. Further, we may not be the graphical designer, who will get credit for the look of the site. Perhaps more importantly, websites are inherently team affairs, largely produced by committees. After the wrangling is over, any usability person might question where his or her work might be found and pointed out. Add to this the short life spans of many design companies or design departments. Even if the company name sticks around, the personnel turn over rapidly in some places. After we've been gone for a year or so, nobody there remembers us. The lesson here is that when a site goes live, we should take screen shots and put them away on a CD somewhere so that later we can make up "portfolios". Forget, and the opportunity may slip away forever. Put it into your design process.

Wednesday, June 20, 2007

When the Least of Us Are Ignored

I was made a convert to the cause of accessibility a few years back when I attended an STC conference with a progression that dealt with handicaps. At each table was a different handicapped person. One was blind, another deaf, another with only limited use of his legs, and so forth. It sounds like a freak show, but it was shockingly enlightening. I never forgot the lessons I learned at that session, and if you ever get a chance to be taught those lessons yourself, I suggest you take it.

One big lesson I learned was that accommodating the handicapped is not necessarily a big or expensive proposition; mostly it's a matter of simply being conscious of them. Widen aisles a little. Don't use slick flooring everywhere. Give optional paths that are not demeaning. In my view, this applies to all of us in human factors.

Then I walked onto the Sakai project, and was jolted again. The one person on the whole big, extended team who was thinking about accessibility for the visually handicapped was almost literally crying out in the wilderness of Ann Arbor, Michigan. Sakai's interface was a long way from being handicapped-friendly. Tests proved that screen readers couldn't use it. It's been improved, but it's still not exactly ready for screen-reader prime time. Portals are often difficult for screen readers to use. Flash, text in graphics, and scripts can be real headaches, too.

I've since been struck several times by how little attention is paid to accessibility online by any website owner. Even e-commerce sites are often impenetrable for the blind, and unnecessarily so. (This may change. The National Federation of the Blind is suing Target Corp. to make its online site accessible, under the Americans with Disabilities Act. Very preliminary so far.)

But I also found out something else interesting -- concern about the handicapped is generally in direct proportion to how much contact a designer or marketing manager has with the handicapped. If someone in their workplace, church, or family is blind, deaf, or has physical problems, they're usually far more interested in making the blind welcome online. If they've never run across the handicapped except in movies, then they're often not just blind themselves, but dismissive.

Sunday, June 17, 2007

Eye-Tracking of Multiple Images

A user experience expert will often want to know where users’ eyes are going on a page. The early equipment for “eye tracking” or “eye gazing” was cumbersome and unpleasant for users, but I’m seeing a lot of work being done nowadays with lighter and more easily available gear. This is an example of one research report on eye-tracking I ran across recently: http://psychology.wichita.edu/surl/usabilitynews/91/eyegaze.html

Eye tracking maps are often known in the user experience trade as “heat maps”, because most of the time they're shown as screenshots of websites with superimposed patches of color that go from light blue to blazing red, depending on how long a user has stared at each spot. Current research is revealing interesting things about how people look at sites. Text almost always shows an F-shaped scanning pattern. We've known for a long time that visitors don't read text online, but scan instead. This is old news. But the research reported by Usability News looked into patterns of search on pages with lots of images, and there the gaze patterns break up rapidly into individualistic styles, especially when searching for something in particular. Otherwise, when browsing, visitors show much the same orderly left-right, up-and-down zigzag pattern we'd expect to see. During visual search of multiple images, the eye hops rapidly about in unpredictable patterns as the brain works in overdrive to spot patterns, just as it might in an unfamiliar room with too much furniture.
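Building such a heat map from raw fixation data is mostly bookkeeping: bin the fixation coordinates weighted by dwell time, blur, and overlay the result on the page screenshot. Here's a minimal sketch; the coordinates and durations are invented, and it assumes NumPy and SciPy are available.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Invented fixation data: (x, y) in page pixels, dwell time in milliseconds.
fixations = [(120, 80, 400), (130, 90, 650), (400, 85, 200),
             (125, 300, 900), (410, 310, 150)]

page_w, page_h = 800, 600
heat = np.zeros((page_h, page_w))
for x, y, dwell in fixations:
    heat[y, x] += dwell                    # weight each spot by how long the eye stayed

heat = gaussian_filter(heat, sigma=25)     # spread point fixations into soft patches
heat /= heat.max()                         # normalize to 0..1 for a blue-to-red color ramp
print(heat.shape, round(float(heat.max()), 2))   # (600, 800) 1.0
```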

The research didn’t investigate further, but from my own experience I’d say that the hippity-hoppity effect can be neutralized with proper use of boxing, labels, heads, and other clues that let the visitor quickly narrow down choices.

Sunday, June 3, 2007

Web Analytics

One of the things I wish my colleagues knew more about is Web analytics. We've ceded this important facet of usability to the marketing folks, and we need to get a piece of it back.

Web analytics is the ability to get constant data on where users have been in your website, what they looked at, where they came from, what they bought, how long they lingered, and so forth. Much of this stuff used to be in log files, but they proved to be too feeble for prediction. Current analytics packages have much more functionality than log files ever did. For example, in WebTrends I can see what proportion of users went in one of several directions from the main page, and where they went after that, and after that. I can see what keywords they used in searches, both on the site and in search engine queries to find the site.

Web analytics has become the province of marketing departments everywhere, but often they don't know how to use the data for usability or IA purposes. We do, but most of us never look at the data, or don't know what to do with it when we do. When you can see clickpaths through your site, you can begin to optimize your navigational structure. You can gradually make the site better. If you're using a WA package now, I'd suggest you start mining it for every bit of gold you can find.
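Even without a commercial package, the basic clickpath mining is straightforward once pageviews carry a session ID. Here's a rough sketch (the log data is invented) that counts where visitors go immediately after the home page:

```python
from collections import Counter

# Invented pageview log: (session_id, path), already in time order within a session.
views = [(1, "/"), (1, "/products"), (1, "/cart"),
         (2, "/"), (2, "/support"),
         (3, "/"), (3, "/products"), (3, "/products/widget")]

# Group views into per-session clickpaths.
paths = {}
for session, page in views:
    paths.setdefault(session, []).append(page)

# Count the first step taken from the home page in each session.
next_from_home = Counter()
for path in paths.values():
    for here, there in zip(path, path[1:]):
        if here == "/":
            next_from_home[there] += 1

for page, n in next_from_home.most_common():
    print(f"{n}/{len(paths)} sessions went from the home page to {page}")
```

The same counting, applied a level deeper each time, is what produces the "where they went after that, and after that" views a package like WebTrends gives you for free.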

Friday, May 25, 2007

The Skills We Need as HCI'ers

When I got full-time into HCI, it wasn't with the intention of writing code or picking site colors. I like the psychology and IA stuff a lot better, and even the worst CS students are far superior programmers compared to me. I'm attracted more to the social aspects of technology usage. Few of my HCI brethren seem to come from the web design end of the business. There are programmers, sure, but they tend to be more hard-core, writing software rather than web scripts. Yet, job descriptions for usability people are often hybrids of usability and true web design. Companies don't seem to care if we can do card sorts or usability tests. They want us to program.

Flash is a popular requirement, but there are even more likely items on the corporate wish list. .Net is a favorite: ASP.net, VB.net, or C#. Java has made good inroads, too. Javascript, of course. HTML is a given, but XML is coming up fast. It's tough to know if a company is looking for an HCI pro who can program a little, or a programmer who can recognize horrible usability when he sees it.

I have to admit that at one point between gigs I began looking again at ASP.net, on the advice of Ed Sullivan, a feisty and knowledgeable guy from IUPUI who assured me that an ASP.net programmer would always have a job. It still didn't trip my trigger, but the fact that I was diverting my attention in a direction I never wanted to go is indicative of the state of HCI at the moment. Are we non-programmer HCI'ers at a disadvantage, compared with the ASP.net crowd and New Media grads? Maybe. Depends on where you are in the country, maybe. But around here, I think for sure we're having more trouble justifying our existence.

Monday, May 21, 2007

Users Cannot Prognosticate

One of the favorite usability techniques by the uninformed is to sit a prospective user down in front of an almost-finished application and ask "So...how do you like it?"

This is wrong for at least two reasons. The first is the "New Coke Mistake". Older readers will recall that the Coca-Cola Company's New Coke was released in 1985. It was a sweeter, sprightlier formulation prompted by the enormous success of the Pepsi Challenge, a nationwide taste test that legitimately proved Pepsi to be a runaway favorite of test-sippers. But Coke made a huge mistake by equating sip-tests with long-term purchase and usage. Even hardened Coke drinkers didn't like the sweeter Pepsi, can after can. They wanted their old Coke. The taste tests had panicked the suits at Coke and led to a disaster of such proportions that the whole episode is still taught in marketing classes today.

The second reason is more subtle. It results from our human inability to anticipate. We like to think we're good at anticipating things. What else is a cerebrum for? But in study after study, we humans show a remarkable lack of prognostication skills. We can't correctly predict what we'll like or dislike later, what kind of gifts we'll want down the road, or what we actually do in a crisis. We do very poorly predicting how we'll respond to emotional events. Nor can we reliably predict how we'll like something later that we've just now seen. In actual fact, our conscious mind isn't all that conscious of things, either those things generated within the skull, or from the outside. Drunks aren't aware that they're impaired, for example. Cell phone users insist their reaction times are unaffected while driving, when research plainly shows that reaction is very much diminished, turning 20-somethings into senior citizens. Marketers have repeatedly found to their chagrin that products proposed to focus group members and enthusiastically embraced at the time often fail to catch on when they're actually sold.

That's not to say that focus groups or interviews are useless. They're not, but they're best used for eliciting information about the now, not for anticipating how participants will behave or feel later. It's almost never worthwhile asking participants to foretell the future.

Tuesday, May 15, 2007

My Cell Phone, My Death

I've been interested for some time in the hidden cognitive costs of talking on cell phones. I've found that almost nobody believes that their driving is impaired while chatting on a mobile, but the evidence is absolutely unshakable. It's been going on for years. I happened upon an article in Chance magazine from 1997, where the authors provided very strong statistical evidence that individual crashes were closely linked to cell phone use, but it took David Strayer and the University of Utah to actually measure reaction times and prove that cell phone use effectively turned 20-somethings into senior citizens while driving. It's purely a cognitive thing; using a hands-free set doesn't make any difference. Talking with a passenger doesn't have the same effect. Strayer's research shows that using a cell phone degrades change detection.

Of almost equal interest is the refusal of most people to believe the research. We're not conscious of much that happens within us, despite the belief that we are. I find that my usability class students are uniformly indignant at the suggestion that merely talking on a cell compromises their ability to drive. The attentional blindness, however it works, that envelops us during that time is not consciously evident to us. Hence, we remain oblivious to the danger.

The 1997 Chance article by Redelmeier and Tibshirani (cited by Strayer) detailed the meticulous analysis the authors did to conclude that drivers were some four times more likely to crash while using a car phone than when not using one. But they also examined why, if the chances soar so dramatically, the overall accident rate hasn't also risen dramatically, since a visible rise might by itself make cell users give up the devices while driving. They found that although cell phone use would indeed boost an individual's chances for an accident, the chance of an accident at all is so low that in the aggregate cell phone use doesn't raise the population's rate much at all. Cell phone calls are generally of short duration, which further reduces the overall effect.
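The point is easier to see with toy numbers (mine, not theirs): a fourfold individual risk, applied to the few minutes a day most drivers spend on the phone, barely nudges the population's crash rate.

```python
# Toy numbers (not from the paper) showing why a 4x individual risk barely
# moves the aggregate crash rate when phone time is a small slice of driving time.
baseline_crashes_per_hour = 1 / 1_000_000   # assumed baseline risk while driving
relative_risk_on_phone = 4.0
share_of_driving_on_phone = 0.03            # assume ~3% of time behind the wheel is on a call

rate_without_phones = baseline_crashes_per_hour
rate_with_phones = (baseline_crashes_per_hour *
                    ((1 - share_of_driving_on_phone) +
                     share_of_driving_on_phone * relative_risk_on_phone))

increase = rate_with_phones / rate_without_phones - 1
print(f"Population crash rate rises by only {increase:.0%}")   # about 9% with these numbers
```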

Friday, May 11, 2007

What's REALLY Screwing Up Usability

After a considerable period of experience, I have come to the conclusion that most websites suck more because of lousy navigation than because of bad control placement, poor color choices, small text sizes, or anything else related to layout. I've found that users will forgive and use almost any interface mistakes, if only they can find what they're looking for. Poor navigation has left more users looking as confused as goats on Astroturf than any other single cause.

Now, it should be noted that this general rule applies more obviously to information-storage sites, not so much to interactive sites. But even if the emphasis is on interactivity, knowing where to go next in a sequence of steps is only a variant on where to go next to find materials.

It's for this reason that I wish HCI programs put more emphasis on information architecture. Many HCI'ers, even those with grad degrees, can't do card sorts or affinity exercises, nor perform cluster analysis. They have no clue about taxonomies, ontologies, or thesauri. I'm finding that the ability to organize a whole site logically is both science and fine art, and deserves more class time than it's getting.
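For the curious, the cluster analysis behind a card sort isn't exotic. Given a card-by-card co-occurrence matrix (how many participants put each pair of cards in the same pile), it's a few lines with SciPy; the matrix and card names below are invented.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

cards = ["Invoices", "Receipts", "Payroll", "Vacation", "Benefits"]

# Invented co-occurrence counts out of 10 participants: how often each pair of
# cards landed in the same pile during the sort.
co = np.array([[10,  9,  4,  1,  1],
               [ 9, 10,  3,  1,  2],
               [ 4,  3, 10,  2,  3],
               [ 1,  1,  2, 10,  8],
               [ 1,  2,  3,  8, 10]])

distance = 1 - co / 10.0                    # more co-occurrence -> smaller distance
np.fill_diagonal(distance, 0)
tree = linkage(squareform(distance), method="average")
groups = fcluster(tree, t=2, criterion="maxclust")

for cluster_id in sorted(set(groups)):
    print(cluster_id, [c for c, g in zip(cards, groups) if g == cluster_id])
```

With these made-up counts the finance cards fall into one group and the HR cards into another, which is exactly the kind of grouping you'd carry into a navigation scheme.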

Saturday, May 5, 2007

Life Poses Usability Problems

As if life didn't have enough usability problems, Second Life is apparently even worse. Over at Meta Versatility, there's a nice blog note about Second Life's usability difficulties. Linden Labs has engaged Adaptive Path to clean up the interface.

Sunday, April 29, 2007

Building Levels and Websites

Stewart Brand, the author of "How Buildings Learn", is a proponent of the layered theory of time and space. In buildings, for example, he identifies six levels: site, structure, skin, services, space plan, and stuff. The site is the basis for everything else. It's the lot, the foundation, the ground underneath. It sets limits for everything else. The structure is then the load-bearing part of the building, and it sets still more limits for further construction. The skin is the outer covering, and the services are the functional parts of the building: the electrical, water, heat, elevators, and so forth. The space plan is the movable walls within, and the stuff is people, furniture, and other readily moved items.

In this view, each layer has its own time cycle, and woe betide the building owner who ties them together or makes them move too quickly. The stuff can change daily, while the space plan typically changes every two or three years, or perhaps longer. The services wear out over a decade or so, the skin lasts longer still, and the site almost never changes. He counsels strongly against tying the various levels together, with built-in furniture for example, because it forces levels with inherently different time-scales to move as one; built-in sofas, tables, or chairs eventually demand either radical surgery on the space plan or tolerance of out-of-date furniture.

I think this structural metaphor can be applied to portals and websites, and it can help us understand why some sites always seem to give us trouble. Of course, the absolute time-scales have to be compressed for the IT world, but the relative split between levels seems to hold.

Try these equivalents:

Site = Server hardware and underlying base code, such as the portal platform
Structure = Information architecture
Services = App code (portlets, JavaScript, etc.)
Space plan = Links, portlets, pages, actual "places" that are delineated from one another
Skin = Appearance (CSS, colors, layout)
Stuff = Content (text, news items, and so on)

From this perspective, it can be seen why CSS is such a good idea -- it isolates the skin from the space plan and the stuff, giving us distinctly different time-scales for them. But there is also a caution here: changing the space plan too often, just because the stuff changes, is a mistake. And in the Web world, the skin is generally replaced more often than the space plan, but they are often replaced together, too. Is this a mistake? Is novelty so important that we have to mess with the user's comfort levels?
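To make the caution concrete, here's a toy sketch of my own (not anything Brand proposes): give each web-world layer from the list above a rough change cadence, then flag any pair we've decided to redesign or deploy together whose cadences are far apart. All of the intervals are assumptions for illustration.

```python
# Toy sketch (my invention, not Brand's): flag couplings between layers whose
# natural change cadences differ so much that tying them together causes pain.

# Rough, assumed change intervals in months for the web equivalents above.
change_interval_months = {
    "site (servers, portal platform)": 60,
    "structure (information architecture)": 36,
    "services (app code)": 12,
    "space plan (pages, links, portlets)": 24,
    "skin (CSS, layout)": 18,
    "stuff (content)": 0.25,
}

# Pairs we've chosen to redesign or deploy together.
couplings = [
    ("skin (CSS, layout)", "stuff (content)"),   # inline styling couples these; CSS avoids it
    ("skin (CSS, layout)", "space plan (pages, links, portlets)"),
]

for a, b in couplings:
    slow, fast = max(change_interval_months[a], change_interval_months[b]), \
                 min(change_interval_months[a], change_interval_months[b])
    if slow / fast > 5:
        print(f"Caution: {a} and {b} change on very different time-scales ({slow / fast:.0f}x)")
```

It's the digital version of the built-in sofa: the first coupling gets flagged, the second doesn't, which is roughly my intuition about where the real trouble lies.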

I may not have the right Web components identified with the right building metaphors, or the whole comparison may be futile. Look through Brand's book and tell me what you think.

Tuesday, April 24, 2007

At Last I'm a Lert

As the saying goes, "Be alert...the world needs more lerts!" I'm contributing my share.

I've discovered Google Alerts, and I'm using it to watch for some terms. Google Alerts lets you know when something new is published on the Web that matches a keyword you've registered. I, for example, am using three alerts: my name, "usability", and "information architecture".

So far the results have been mixed. I've caught a couple of references to my old books, which is exceedingly gratifying. And I've caught a few good blog entries here and there about my professional subjects. But a good many notifications are less than intriguing. The signal-to-noise ratio is definitely trending toward noise, but it's still far easier than running tedious, full-out Google searches to stay current.

Tuesday, April 17, 2007

A REAL Man-Machine Interface

I see they're shutting down the PEAR, the Princeton Engineering Anomalies Research laboratory. After some 30 years of generating controversy and reams of odd data about telekinesis, the PEAR is closing up shop, done in by exhaustion and time. Opened in 1979, the lab explored whether humans could change physical events with just the power of mind. Turns out, as the lab concludes, they can, but only very slightly, something like 2 or 3 events out of 10,000.

Research into the paranormal isn't what it once was. The Psychophysical Research Laboratories (PRL), also located in Princeton, shut down. Duke University's Parapsychology Laboratory, perhaps the most famous of them all, spun off its parapsychology work into the now-independent Rhine Research Center, named after Dr. Joseph Banks Rhine, who popularized those famous ESP cards while at Duke. Interest in the paranormal appears to have peaked many years ago, and now even the scientists who had high hopes for actual results may be drifting away.

But the PEAR was interesting to HCI'ers because it focused on telekinesis, controlling things with thoughts. If telekinesis could be shown, then it might be possible to change the TV channel just by projecting a thought in that direction. Sadly, the prospect now looks bleak.

Wednesday, April 4, 2007

Usability in Disaster Relief

Just as I published a self-congratulatory posting about my own article in ACM's Interactions magazine, I got to read several articles in the March issue of Communications of the ACM about the HCI of disaster relief. Fascinating stuff. You can see the TOC on the site, although you can't read the articles unless you're an ACM member.

The technical challenges of breakdowns, weather, and moving to stay safe and dry are joined by the problems of designing exceedingly hardened systems ahead of time that will function when the hammer falls. Interactivity is a big issue, of course, but so are the human frailties that kick in during disasters. Reading about the work being done makes me feel a little sad that I'm not involved in such noble work.

Sunday, April 1, 2007

Usability as Risk Management

Keep an eye out for the March edition of ACM's Interactions. I'm supposed to be in it. Pays to be humble, no?

The article deals with a mechanism I developed when I worked for a consulting company. We constantly have problems convincing our employers and clients to use usability techniques. But why force the issue? Why not let the brass decide by putting the decision in their language? That's when I started doing risk analysis of new online efforts, using usability factors. I get the business stakeholders of a new website, intranet, portal, or whatnot into a room together and spend some time quantifying how bad things could get if the initiative fails, and out of that comes an index of risk. Usability techniques can reduce the risk, but they're expensive, so the techniques chosen must be matched to the risk level; otherwise usability won't make it into the project plan at all.
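The article spells out the actual mechanism; what follows here is only a rough sketch of the flavor of it, with consequence categories, weights, and thresholds I've invented for illustration.

```python
# A rough sketch of a usability risk index with made-up categories and weights;
# the real mechanism is the one described in the Interactions article.

# Stakeholders score each consequence of failure from 0 (no impact) to 5 (severe);
# each consequence carries a weight reflecting how much the business cares about it.
consequences = {
    "lost revenue":        {"weight": 0.4, "severity": 4},
    "support call volume": {"weight": 0.2, "severity": 3},
    "brand damage":        {"weight": 0.2, "severity": 2},
    "regulatory exposure": {"weight": 0.2, "severity": 1},
}

# Weighted average severity, normalized to a 0-1 index.
risk_index = sum(c["weight"] * c["severity"] for c in consequences.values()) / 5

# Match the spend on usability techniques to the risk level.
if risk_index > 0.6:
    plan = "full treatment: field studies, iterative prototyping, formal usability tests"
elif risk_index > 0.3:
    plan = "moderate: heuristic review plus a small usability test"
else:
    plan = "light: expert walkthrough only"

print(f"Risk index: {risk_index:.2f} -> {plan}")
```

The point isn't the arithmetic, which any spreadsheet could do; it's that the brass see usability spending expressed in their own terms, as insurance against consequences they themselves quantified.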

If you can lay your hands on a copy, let me know your opinion.

Portals Will Forever Suck for Usability

In the past few years I've spent a lot of time working with portal software. I was involved for half a year with the design of Sakai, which you may know as Oncourse CL. For those of you who want to flail me about its usability flaws, be assured that we refugees from the Tools Team, who were the usability voices, are not too happy about them either. I've since become even more deeply involved in WebSphere Portal and its evil little sidekick LWCM, the web content management part of the package. In between, I've been a user of one or two other portal packages. Believe me, they all stink from a usability perspective.

First let's get together on terminology. A "portal" in the old sense is just a website with a bunch of collected links that send you elsewhere. Yahoo pioneered this interface. The newer sense of "portal" is actually a software package that specializes in showing various systems to a user all at once, and supposedly giving the user the ability to customize his own interface (although no one ever does). Portal pages can have different "windows" on a page. For example, one place on the page permits Google searching, another one gives the weather, another one shows your stock market portfolio, and yet another has corporate news from your employer's PR machine. These various "windows" are actually known as "portlets", "applets", "gadgets", or "tools". Oncourse CL is an example. There, it's "tools". You can write a tool. Anybody can. It's completely open, and that's what Sakai's creators are most happy about.

But that's Sakai's Achilles heel, too. Tools don't talk well with one another, because they're designed to be secure and happy living alone. Navigation and other usability factors vary from tool to tool, so you can never just settle down to one consistent interface.

Major portal makers, like IBM with its WebSphere Portal, suffer similar problems, except in greater profusion. A typical WebSphere Portal page may have a dozen "portlets" on it, all doing something different, and all potentially with a different "skin". If nothing else, they don't always fit thematically together. They're like jet cockpits. Over there is the flaps indicator, while over here is the oil pressure indicator. Just a flock of barely connected different things. There's no flow to the page, few cues, and little uniformity. And it's designed to be just that way, as if the business and computer science communities conspired in smoke-filled rooms to stick it to both users and usability professionals. Portal proponents claim that users can overcome this madness by customizing their pages, but that's just crazy talk. Users won't do it. They suffer in silence instead. In a portal, there's almost no room for conventional user testing or interface design. There is no "interface" as such, merely pages with things that do stuff. This is great for developers and business types, because portal architecture makes it much, much easier to incorporate in one place functions as diverse as weather announcements, time reporting, and CRM applications.

If you want to see examples, check out the NCAA (www.ncaa.org/wps/portal) or IBM (www.ibm.com).

Gender-Designed OS

Now that we know there are some generalized gender differences between male and female brains, what does that mean for design? We know, for example, that women tend to see color better, and to appreciate its use more. But we also know there are functional differences. For example, women seem to keep more browser windows open, and switch between them more frequently, than men. I haven't considered this in my designs, I have to admit. Yet a highly capable female consultant of my acquaintance confirms it from her experience.

My curiosity about this phenomenon went all the way into operating system design. Men are well known for being more spatially oriented than women, which often gives them an advantage in fields like mechanical engineering. It occurs to me that operating systems are by and large spatial metaphors. We don't just use folders, but folder locations, relative to other locations. We refer to files as being "in folders" in the OS, when in reality they are nothing of the kind; there are just directory entries pointing to sectors on a hard drive. Even in this age of objects, the objects are thought of as spatial entities, like appliances with plumbing outlets. But then, the vast majority of workers in this field are male. Is it possible that the metaphors we've all come to accept are just male representations, and that there's another way?

What would an OS look like designed completely by women? I put this question to one of my students, who derisively responded, "I guess you'd just make everything pink." But after some discussion she came to see my point. If females are more relationship-oriented, as many researchers now maintain, would a truly female-centric OS dispense with most of the location-heavy metaphor and emphasize file relationships? How might such an OS work? What would it look like? I'm too male to guess. Maybe it would look more like the Web. I don't know, but I'd like to find out.
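For what it's worth, here's a toy sketch of one way the relationship idea could be expressed in code. Everything in it is my own speculation, invented for illustration, not a design anyone has built.

```python
# Toy speculation: files addressed by their relationships rather than by location.
# Nothing here is a real OS design; it's just one way the idea might be expressed.
from collections import defaultdict

# Each file carries relationships (people, projects, topics) instead of a path.
relations = defaultdict(set)

def relate(filename, *links):
    """Associate a file with the people, projects, or topics it belongs to."""
    for link in links:
        relations[link].add(filename)

def find(*links):
    """Retrieve files by the intersection of their relationships, not by folder."""
    sets = [relations[link] for link in links]
    return set.intersection(*sets) if sets else set()

relate("budget-2007.xls", "Dean Smith", "spring budget", "finance")
relate("syllabus.doc", "HCI course", "students")
relate("memo.doc", "Dean Smith", "HCI course")

print(find("Dean Smith"))                # everything connected to that person
print(find("Dean Smith", "HCI course"))  # only what they share -> {'memo.doc'}
```

Whether that's more "female" or just more Web-like, I can't say; it's simply what falls out when you drop the location metaphor and keep only the relationships.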