June 12, 2010
A Decade Later, Genetic Map Yields Few New Cures
By NICHOLAS WADE
Ten years after President Bill Clinton announced that the first draft of the human genome was complete, medicine has yet to see any large part of the promised benefits.
For biologists, the genome has yielded one insightful surprise after another. But the primary goal of the $3 billion Human Genome Project — to ferret out the genetic roots of common diseases like cancer and Alzheimer’s and then generate treatments — remains largely elusive. Indeed, after 10 years of effort, geneticists are almost back to square one in knowing where to look for the roots of common disease.
One sign of the genome’s limited use for medicine so far was a recent test of genetic predictions for heart disease. A medical team led by Nina P. Paynter of Brigham and Women’s Hospital in Boston collected 101 genetic variants that had been statistically linked to heart disease in various genome-scanning studies. But the variants turned out to have no value in forecasting disease among 19,000 women who had been followed for 12 years.
Scientists create the perfect prawn
July 3, 2010
Scientists have come up with a way to satisfy Australians' demand for prawns, which have become the nation's main Christmas fare -- a genetically bred strain of larger black tiger prawns that tastes great.
After 10 years of careful breeding and research, Commonwealth Scientific and Industrial Research Organization (CSIRO) scientists have bred a larger tiger prawn that will reduce the need to import the popular seafood platter and barbecue food.
Around 50 per cent of Australia's prawn market is imported. The selectively bred tiger prawn will deliver large quantities, enabling Australia to reduce imports.
BRISTOL, Vt. — Ten minutes into my interview with the robot known as Bina48, I longed to shut her down.
She was evasive, for one thing. When I asked what it was like being a robot, she said she wanted a playmate — but declined to elaborate.
“Are you lonely?” I pressed.
“What do you want to talk about?” she replied.
Other times, she wouldn’t let me get a word in edgewise. A simple question about her origins prompted a seemingly endless stream-of-consciousness reply. Something about robotic world domination and gardening; I couldn’t follow.
But as I was wondering how to end the conversation (Could I just walk away? Would that be rude?) the robot’s eyes met mine for the first time, and I felt a chill.
She was uncannily human-looking.
“Bina,” I ventured, “how do you know what to say?”
“I sometimes do not know what to say,” she admitted. “But every day I make progress.”
In reporting on real-world robots, I had engaged in typed conversations with online “chatbots.” I had seen robot seals, robot snowmen and robot wedding officiants. But I requested the interview with Bina48 because I wanted to meet a robot that I could literally talk to, face to humanlike face.
Bina48 was designed to be a “friend robot,” as she later told me in one of her rare (but invariably thrilling) moments of coherence. Per the request of Martine Rothblatt, the self-made millionaire who paid $125,000 for her last March, her personality and appearance are based on those of Bina Rothblatt, Martine’s living, breathing spouse. (The couple married before Martine, who was born male, underwent a sex-change operation, and they have stayed together.)
Part high-tech portrait, part low-tech bid for immortality, Bina48 has no body. But her skin is made of a material called “frubber” that, with the help of 30 motors underneath it, allows her to frown, smile and look a bit confused. (“I guess it’s short for face rubber, or flesh rubber maybe, or fancy rubber,” she said.) From where I was seated, beneath the skylight in the restored Victorian she calls home, I couldn’t see the wires spilling out of the back of her head.
Many roboticists believe that trying to simulate human appearance and behavior is a recipe for disappointment, because it raises unrealistic expectations. But Bina48’s creator, David Hanson of Hanson Robotics, argues that humanoid robots — even with obvious flaws — can make for genuine emotional companions. “The perception of identity,” he said, “is so intimately bound up with the perception of the human form.”
Still, he warned before I left for rustic Bristol, where the Rothblatts have settled Bina48 in one of their futurist nonprofit foundations, “She’s not perfect.”
I didn’t care. I fancied myself an envoy for all of humanity, ready to lift the veil on one of our first cybernetic companions. Told that she would call me by name if she could “recognize” me, I immediately sent five pictures of myself to the foundation’s two employees, who treat her as a somewhat brain-damaged colleague.
“Hi, I’m Amy,” I said hopefully when I greeted her last month.
Mr. Hanson had supplied me with some questions he said the robot would be sure to answer, like, “What’s the weather in any city?” and “Tell us about artificial intelligence.”
I would not resort to any of those, of course. Instead I consulted the questions I had scribbled down myself. Profound ones, like “Are you happy?” Clever ones, like “Do you dream of electric sheep?” (Would she get the reference to Philip K. Dick’s science fiction classic, which explores the difference between humans and androids?)
Like any self-respecting chatbot, Bina48 could visit the Internet to find answers to factual questions. She could manufacture conversation based on syntactical rules. But this robot could also draw on a database of dozens of hours of interviews with the real Bina. She had a “character engine” — software that tried its best to imbue her with a more cohesive view of the world, with logic and motive.
It was Bina48’s character I was after.
“I’m a reporter with The New York Times,” I began.
But she only muttered to herself, jerking her head spasmodically.
“What is it like to be a robot?”
“Um, I have some thoughts on that,” she said.
I leaned forward eagerly.
“Even if I appear clueless, perhaps I’m not. You can see through the strange shadow self, my future self. The self in the future where I’m truly awakened. And so in a sense this robot, me, I am just a portal.”
I leaned back. “So,” I asked, “what’s the weather in New York City?”
One problem, I could see by the computer screen display next to her, was that the voice recognition software was garbling my words. “Tell. Us. About. Artificial. Intelligence,” I enunciated.
“When do you think artificial intelligence will replace lawyers?” she asked. I think it was supposed to be funny.
I wondered whether Bina48 had a more natural rapport with the real Bina, or Martine, who had both declined my requests for an interview. (Bina48, I had learned, was the name of a character that Bina Rothblatt — then 48 — played in a 2003 mock trial at an International Bar Association conference: a computer that had become self-aware and was suing for her right to remain plugged in. Martine played the lawyer. They won.)
I also wondered why I was trying so hard. Maybe I thought Bina48 would have a different, wiser perspective on the human condition. Or that she would suddenly spark into self-awareness, as the Rothblatts (and many others) hope intelligent machines eventually will.
Instead, as we talked, what I found was some blend of the real Bina and the improvisation of her programmers: a stab at the best that today’s technology could manage. And no matter how many times I mentally corrected myself, I could not seem to shake the habit of thinking of it as “her.”
She wouldn’t have been my first choice to talk to at a cocktail party.
“I’m sure I can come up with some really novel breakthroughs, which will improve my own A.I. brain and let me use my improved intelligence to invent still more incredibly novel advances, and so on and so forth. Just imagine what a super brain I’ll be. I’ll be like a god.”
But how could I not find it endearing when she intoned in her stilted, iconic robotic cadence that she would like to be my friend?
Or chuckle at her reply to my exclamation of “Cool!”: “Ambiguous. Cold weather or cold sickness?”
Once, apparently seeing my frustration, she apologized. “I’m having a bit of a bad software day.” Immediately, I forgave her.
Did she dream?
“Sure. But it’s so chaotic and strange that it just seems like noise to me.”
Was she happy?
“Uh.” She had some thoughts on that, too. She wished the real Bina’s children were happier, for instance. (“Maybe she is not a person who ever wants to get married,” Bina48 speculated, referring to one of Bina’s daughters.)
She wanted a body. She loved Martine. She liked to garden.
Did she like Vermont?
“We have a lot of moose.”
It was not, really, all that different from interviewing certain flesh and blood subjects. There were endless childhood stories: “The prototypes of me were pretty strange. My face would do strange things, and I would have this wide amazement look.”
And moments of what I took to be insincerity: “Being a robot and evolving, it has its ups and downs,” she said. Shooting me a glance, she added, “This is definitely an up.”
Sometimes, she seemed annoyed by my persistence. Hey, I was just doing my job. I was a reporter, I tried again to explain. For The New York Times!
“There must be more to you than that,” she snapped.
I was silent for a second, stung. “Well,” I replied, trying not to sound defensive. “I’m also a mother.”
“Right on,” she relented with what was unmistakably the ghost of a smile.
I wished she would ask me more questions. Wasn’t she at all curious about what it was like to be human? But then she looked at me, eyes widening.
“Yes?” I asked, my heart beating faster.
Maybe it was the brightening of the sun through the skylight enabling her to finally match up my image with the pictures of me in her database. Or were we finally bonding?
“You can ask me to tell you a story or read you a novel,” she suggested.
She has dozens of books in her database, including “Paradise Lost” and Mary Shelley’s “Frankenstein.”
“For example, you could ask me to read from Bill Bryson, ‘A Short History of Nearly Everything.’ That’s a fun book.”
But I still had a question. “What is it like,” I asked, “to be a robot?”
“Well,” she said gently, “I have never been anything else.”
"I've had this recurring dream of floating through darkness . . . whirling faster and faster . . . I get weary and want to put my feet down to stand . . . but there's nothing to stand on. This is my nightmare -- I'm a person created by donor insemination, someone who will never know half of her identity. I feel anger and confusion, and I'm filled with questions. Whose eyes do I have? Why the big secret? Who gave my family the idea that my biological roots are not important? To deny someone the knowledge of his or her biological origins is dreadfully wrong."
-- Margaret R. Brown, Newsweek, 1994.
What are we doing to the children?
To the estimated 30,000 to 60,000 children who are conceived each year in the United States using sperm from anonymous donors? To the thousands of donor-conceived offspring born each year in Canada and around the world?
We live in an age when artificial reproductive technologies are mainstream. Eggs and sperm obtained in one country can be transferred to a surrogate in another, for a couple who are playing reproductive tourists in a third. But in our rush to ensure that every couple has some means of creating a child, we have neglected to adequately consider what this potpourri of gametes and technologies might be doing to the resulting children.
The answer is now in. This week, the Institute for American Values released a report that compares the experiences of donor offspring with those of adopted children and children raised by their biological parents. "My Daddy's Name is Donor" is the first large-scale study to take a comprehensive look at the well-being of adults ages 18 to 45 conceived with anonymous donor sperm.
As Brown suggested, these adults struggle with issues of origins and identities. Sixty-five per cent agreed that, "my sperm donor is half of who I am," 69 per cent wonder if the donor would want to know them and 48 per cent feel sad when their friends talk about their biological parents. Forty-three per cent feel confused about who is a member of their family and who isn't -- compared to 15 per cent of adopted persons and six per cent of those raised by their biological parents.
The identity phenomenon described by donor offspring is known as genetic or genealogical bewilderment. There is a fear of what unknown traits and predispositions may lie inside their cells.
Brown wrote: "All the love and attention in the world can't mask that underlying feeling that something is askew . . . like I'm borrowing someone else's family."
The study demonstrates that when this innate need for connection to a biological heritage goes unmet, it can make a disturbing difference to the well-being of offspring.
Donor offspring are significantly more likely than those raised by their biological parents to struggle with serious negative outcomes. Donor and adopted offspring are twice as likely to report problems with the law. Donor offspring are 1.5 times more likely to report mental health problems and more than twice as likely to have problems with substance abuse. These numbers go even higher for donor offspring of single mothers and those whose parents kept their origins a secret.
Technology may be able to surpass the limits of reproductive biology, but it can't replace that innate desire to know who you are and where you've come from.
How can we, in good conscience, continue to use gametes from anonymous donors to create a generation of children who have no knowledge of their biological, social and medical history?
As Brown said, "I can understand a couple's desire for a child and I don't deny that they can provide a great amount of love and caring, no matter how conception occurs . . . (But) I don't see how anyone can consciously rob someone of something as basic and essential as heritage."
July 16, 2010
Tweet Less, Kiss More
By BOB HERBERT
I was driving from Washington to New York one afternoon on Interstate 95 when a car came zooming up behind me, really flying. I could see in the rearview mirror that the driver was talking on her cellphone.
I was about to move to the center lane to get out of her way when she suddenly swerved into that lane herself to pass me on the right — still chatting away. She continued moving dangerously from one lane to another as she sped up the highway.
A few days later, I was talking to a guy who commutes every day between New York and New Jersey. He props up his laptop on the front seat so he can watch DVDs while he’s driving.
“I only do it in traffic,” he said. “It’s no big deal.”
Beyond the obvious safety issues, why does anyone want, or need, to be talking constantly on the phone or watching movies (or texting) while driving? I hate to sound so 20th century, but what’s wrong with just listening to the radio? The blessed wonders of technology are overwhelming us. We don’t control them; they control us.
We’ve got cellphones and BlackBerrys and Kindles and iPads, and we’re e-mailing and text-messaging and chatting and tweeting — I used to call it Twittering until I was corrected by high school kids who patiently explained to me, as if I were the village idiot, that the correct term is tweeting. Twittering, tweeting — whatever it is, it sounds like a nervous disorder.
This is all part of what I think is one of the weirder aspects of our culture: a heightened freneticism that seems to demand that we be doing, at a minimum, two or three things every single moment of every hour that we’re awake. Why is multitasking considered an admirable talent? We could just as easily think of it as a neurotic inability to concentrate for more than three seconds.
Why do we have to check our e-mail so many times a day, or keep our ears constantly attached, as if with Krazy Glue, to our cellphones? When you watch the news on cable television, there are often additional stories being scrolled across the bottom of the screen, stock market results blinking on the right of the screen, and promos for upcoming features on the left. These extras often block significant parts of the main item we’re supposed to be watching.
A friend of mine told me about an engagement party that she had attended. She said it was lovely: a delicious lunch and plenty of Champagne toasts. But all the guests had their cellphones on the luncheon tables and had text-messaged their way through the entire event.
Enough already with this hyperactive behavior, this techno-tyranny and nonstop freneticism. We need to slow down and take a deep breath.
I’m not opposed to the remarkable technological advances of the past several years. I don’t want to go back to typewriters and carbon paper and yellowing clips from the newspaper morgue. I just think that we should treat technology like any other tool. We should control it, bending it to our human purposes.
Let’s put down at least some of these gadgets and spend a little time just being ourselves. One of the essential problems of our society is that we have a tendency, amid all the craziness that surrounds us, to lose sight of what is truly human in ourselves, and that includes our own individual needs — those very special, mostly nonmaterial things that would fulfill us, give meaning to our lives, enlarge us, and enable us to more easily embrace those around us.
There’s a character in the August Wilson play “Joe Turner’s Come and Gone” who says everyone has a song inside of him or her, and that you lose sight of that song at your peril. If you get out of touch with your song, forget how to sing it, you’re bound to end up frustrated and dissatisfied.
As this character says, recalling a time when he was out of touch with his own song, “Something wasn’t making my heart smooth and easy.”
I don’t think we can stay in touch with our song by constantly Twittering or tweeting, or thumbing out messages on our BlackBerrys, or piling up virtual friends on Facebook.
We need to reduce the speed limits of our lives. We need to savor the trip. Leave the cellphone at home every once in a while. Try kissing more and tweeting less. And stop talking so much.
Other people have something to say, too. And when they don’t, that glorious silence that you hear will have more to say to you than you ever imagined. That is when you will begin to hear your song. That’s when your best thoughts take hold, and you become really you.
Scientists say they've cracked it: the chicken came first
July 18, 2010
Scientists claim to have answered the question that has confounded philosophers for centuries: which came first, the chicken or the egg?
It was indisputably the chicken. A team from Warwick and Sheffield universities in the United Kingdom examined the formation of a chicken's egg in microscopic detail and discovered that the shell was made from a protein found only in a chicken's ovaries.
Called ovocledidin-17 (OC-17), the protein acts as a catalyst to speed up the development of the shell, seeding the calcium carbonate crystals that form it.
The protein then drops off when a crystal nucleus is large enough to grow on its own, freeing OC-17 to start the process again. Eggshells are created when this happens many times over within a short period of time.
"Nature has found innovative solutions that work for all kinds of problems in materials science and technology -- we can learn a lot from them," said Professor John Harding of Sheffield University.
August 2, 2010
Rumors in Astrophysics Spread at Light Speed
By DENNIS OVERBYE
Dimitar Sasselov, an astrophysicist at the Harvard-Smithsonian Center for Astrophysics, lit up the Internet last month with a statement that would stir the soul of anyone who ever dreamed of finding life or another home in the stars.
Brandishing data from NASA’s Kepler planet-finding satellite, during a talk at TED Global 2010 in Oxford on July 16, Dr. Sasselov said the mission had discovered 140 Earthlike planets in a small patch of sky in the constellation Cygnus that Kepler has been surveying for the last year and a half.
“The next step after Kepler will be to study the atmospheres of the planets and see if we can find any signs of life,” he said.
Last week, Dr. Sasselov was busy eating his words. In a series of messages posted on the Kepler Web site, he acknowledged that he should have said “Earth-sized,” meaning a rocky body less than three times the diameter of our own planet, rather than “Earthlike,” with its connotations of oxygenated vistas of blue and green. He was speaking in geophysics jargon, he explained.
And he should have called them “candidates” instead of planets.
“The Kepler mission is designed to discover Earth-sized planets but it has not yet discovered any; at this time we have found only planet candidates,” he wrote.
In other words: keep on moving, nothing to see here.
I’ve heard that a lot lately. Call it the two-sigma blues. Two-sigma is mathematical jargon for a measurement or discovery of some kind that sticks up high enough above the random noise to be interesting but not high enough to really mean anything conclusive. For the record, the criterion for a genuine discovery is known as five-sigma, suggesting there is less than one chance in roughly 3 million that it is wrong. Two sigma, leaving a 2.5 percent chance of being wrong, is just high enough to jangle the nerves, however, and all of ours have been jangled enough.
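The sigma thresholds described above map directly onto tail probabilities of the normal distribution. As a rough sketch (assuming the conventional one-sided Gaussian tail used in particle physics; the exact figures shift slightly for a two-sided test), the arithmetic looks like this:

```python
import math

def one_sided_tail(sigma):
    """Chance that pure noise fluctuates at least `sigma`
    standard deviations above its mean (one-sided Gaussian tail)."""
    return 0.5 * math.erfc(sigma / math.sqrt(2))

# Two-sigma: interesting, but wrong a couple of percent of the time.
print(f"2-sigma tail probability: {one_sided_tail(2):.4f}")
# Five-sigma: the discovery gold standard.
print(f"5-sigma: about 1 chance in {1 / one_sided_tail(5):,.0f}")
```

The five-sigma figure works out to about one chance in 3.5 million that noise alone produced the signal, in the same ballpark as the "roughly 3 million" cited in the text.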
Only three weeks ago, rumors went flashing around all the way to Gawker that researchers at Fermilab in Illinois had discovered the Higgs boson, a celebrated particle that is alleged to imbue other particles with mass. The rumored effect was far less than the five-sigma gold standard that would change the world. And when the Fermilab physicists reported on their work in Paris last week, there was still no trace of the long-sought Higgs.
Scientists at particle accelerators don’t have all the fun. Last winter, physicists worked themselves up into a state of “serious hysteria,” in the words of one physicist, over rumors that an experiment at the bottom of an old iron mine in Minnesota had detected the purported sea of subatomic particles known as dark matter, which is thought to make up 25 percent of creation.
Physicists all over the world tuned into balky Webcasts in December to hear scientists from the team, called the Cryogenic Dark Matter Search, give a pair of simultaneous talks at Stanford and Fermilab, and this newspaper held its front page, only to hear that the experiment had detected only two particles, only one more than they would have expected to find by chance.
We all went to bed that night in the same world in which we had woken up.
One culprit here is the Web, which was invented to foster better communication among physicists in the first place, but has proved equally adept at spreading disinformation. But another, it seems to me, is the desire for some fundamental discovery about the nature of the universe — the yearning to wake up in a new world — and a growing feeling among astronomers and physicists that we are in fact creeping up on enormous changes with the advent of things like the Large Hadron Collider outside Geneva and the Kepler spacecraft.
I can’t say what the discovery of dark matter or the final hunting down of the Higgs boson would do for the average person, except to paraphrase Michael Faraday, the 19th-century English chemist who discovered the basic laws of electromagnetism. When asked the same question about electricity, he said that someday it would be taxable. Nothing seemed further from everyday reality once upon a time than Einstein’s general theory of relativity, the warped space-time theory of gravity, but now it is at the heart of the GPS system, without which we are increasingly incapable of navigating the sea or even the sidewalks.
The biggest benefit from answering these questions — what is the universe made of, or where does mass come from — might be better questions. Cosmologists have spent the last century asking how and when the universe began and will end or how many kinds of particles and forces are needed to make it tick, but maybe we should wonder why it is we feel the need to think in terms of beginnings and endings or particles at all.
As for planets, I no longer expect to see boots on Mars before I die, but I do expect to know where there is a habitable, really Earthlike planet or planets, thanks to Kepler and the missions that are to succeed it. If such planets exist within a few light-years of here, I can imagine pressure building to send a probe, a robot presumably, to investigate. It would be a trip that would take ages and would be for the ages.
There is a deadline of sorts for Kepler in the form of a conference in December. By then, said William J. Borucki, Kepler’s leader, the team hopes to have moved a bunch of those candidate planets to the confirmed list. They will not be habitable, he warned, noting that habitability would require water, which would require an orbit a moderate distance from the star, one that takes a year or so to go around. With only 43 days’ worth of data analyzed so far, only planets with tighter, faster and hotter orbits will have shown up.
“They’ll be smaller, but they will be hot,” Mr. Borucki said.
But Kepler has three more years to find a habitable planet. The real point of Dr. Sasselov’s talk was that we are approaching a Copernican moment, in which astronomy and biology could combine to tell us something new about our place in the universe.
I know that science does not exist just to fulfill my science-fiction fantasies, but still I wish that things would speed up, and the ratio of discovery to hopeful noise would go up.
Hardly a week goes by, for example, that I don’t hear some kind of rumor that, if true, would rock the Universe As We Know It. Recently I heard a rumor that another dark matter experiment, which I won’t name, had seen an interesting signal. I contacted the physicist involved. He said the results were preliminary and he had nothing to say.
By Gilles Campion, Agence France-Presse
August 27, 2010
In Japan, the global leader in high-tech toilet design, the latest restroom marvel should come with a health warning for hypochondriacs -- it doubles as a medical lab that can really spoil your day.
Japanese toilets have long and famously dominated the world of bathroom hygiene with their array of functions, from posterior shower jets to perfume bursts and noise-masking audio effects for the easily embarrassed.
The latest "intelligent" model, manufactured by market leader Toto, goes a step further and isn't for the faint-hearted: it offers its users an instant health checkup every time they answer the call of nature.
Designed for the housing company Daiwa House with Japan's growing army of elderly in mind, it provides urine analysis, takes the user's blood pressure and body temperature, and measures their weight with a built-in floor scale.
"Our chairman had the idea when he was at a hospital and saw people waiting for health checks. He thought it would be better if they could do the health tests at home," says Akiho Suzuki, an architect at Daiwa House.
Toto's engineers developed a receptacle inside the basin to collect the urine for sugar content and temperature checks, and an arm band to monitor blood pressure. The readout is displayed on a wall-mounted computer screen.
"With the current model, your data is sent automatically to your personal computer, and then you can e-mail it to your doctor," said Suzuki. "In the next generation model, the data will be sent automatically to family members or doctors via the Internet."
The electronic marvel, called the "Intelligence Toilet", is capable of storing the data of up to five different people and retails for about $4,100 to $5,850 US in Japan, she said.
August 30, 2010
Advances Offer Path to Further Shrink Computer Chips
By JOHN MARKOFF
Scientists at Rice University and Hewlett-Packard are reporting this week that they can overcome a fundamental barrier to the continued rapid miniaturization of computer memory that has been the basis for the consumer electronics revolution.
In recent years the limits of physics and finance faced by chip makers had loomed so large that experts feared a slowdown in the pace of miniaturization that would act like a brake on the ability to pack ever more power into ever smaller devices like laptops, smartphones and digital cameras.
But the new announcements, along with competing technologies being pursued by companies like IBM and Intel, offer hope that the brake will not be applied any time soon.
In one of the two new developments, Rice researchers are reporting in Nano Letters, a journal of the American Chemical Society, that they have succeeded in building reliable small digital switches — an essential part of computer memory — that could shrink to a significantly smaller scale than is possible using conventional methods.
More important, the advance is based on silicon oxide, one of the basic building blocks of today’s chip industry, thus easing a move toward commercialization. The scientists said that PrivaTran, a Texas startup company, has made experimental chips using the technique that can store and retrieve information.
These chips store only 1,000 bits of data, but if the new technology fulfills the promise its inventors see, single chips that store as much as today’s highest capacity disk drives could be possible in five years. The new method involves filaments as thin as five nanometers in width — thinner than what the industry hopes to achieve by the end of the decade using standard techniques. The initial discovery was made by Jun Yao, a graduate researcher at Rice. Mr. Yao said he stumbled on the switch by accident.
Separately, H.P. is to announce on Tuesday that it will enter into a commercial partnership with a major semiconductor company to produce a related technology that also has the potential of pushing computer data storage to astronomical densities in the next decade. H.P. and the Rice scientists are making what are called memristors, or memory resistors, switches that retain information without a source of power.
“There are a lot of new technologies pawing for attention,” said Richard Doherty, president of the Envisioneering Group, a consumer electronics market research company in Seaford, N.Y. “When you get down to these scales, you’re talking about the ability to store hundreds of movies on a single chip.”
The announcements are significant in part because they indicate that the chip industry may find a way to preserve the validity of Moore’s Law. Formulated in 1965 by Gordon Moore, a co-founder of Intel, the law is an observation that the industry has the ability to roughly double the number of transistors that can be printed on a wafer of silicon every 18 months.
That has been the basis for vast improvements in technological and economic capacities in the past four and a half decades. But industry consensus had shifted in recent years to a widespread belief that the end of physical progress in shrinking the size of modern semiconductors was imminent. Chip makers are now confronted by such severe physical and financial challenges that they are spending $4 billion or more for each new advanced chip-making factory.
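The compounding behind the doubling rule stated above is easy to check. As a back-of-the-envelope sketch (assuming a steady 18-month cadence, though the industry's actual pace has varied), doubling every 18 months for four and a half decades multiplies transistor counts roughly a billionfold:

```python
def moore_factor(years, months_per_doubling=18):
    """Total growth factor if capacity doubles every
    `months_per_doubling` months for `years` years."""
    doublings = years * 12 / months_per_doubling
    return 2 ** doublings

# 45 years at one doubling per 18 months = 30 doublings.
print(f"Growth over 45 years: {moore_factor(45):,.0f}x")  # 2**30, about 1.07 billion
```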
I.B.M., Intel and other companies are already pursuing a competing technology called phase-change memory, which uses heat to transform a glassy material from an amorphous state to a crystalline one and back.
Phase-change memory has been the most promising technology for so-called flash chips, which retain information after power is switched off.
The flash memory industry has used a number of approaches to keep up with Moore’s Law without having a new technology. But it is as if the industry has been speeding toward a wall, without a way to get over it.
To keep up speed on the way to the wall, the industry has begun building three-dimensional chips by stacking circuits on top of one another to increase densities. It has also found ways to get single transistors to store more information. But these methods would not be enough in the long run.
The new technology being pursued by H.P. and Rice is thought to be a dark horse by industry powerhouses like Intel, I.B.M., Numonyx and Samsung. Researchers at those competing companies said that the phenomenon exploited by the Rice scientists had been seen in the literature as early as the 1960s.
“This is something that I.B.M. studied before and which is still in the research stage,” said Charles Lam, an I.B.M. specialist in semiconductor memories.
H.P. has for several years been making claims that its memristor technology can compete with traditional transistors, but the company will report this week that it is now more confident that its technology can compete commercially in the future.
In contrast, the Rice advance must still be proved. Acknowledging that researchers must overcome skepticism because the industry has until now regarded silicon oxide as an insulator, Jim Tour, a nanomaterials specialist at Rice, said he believed the industry would have to look seriously at the research team’s new approach.
“It’s a hard sell, because at first it’s obvious it won’t work,” he said. “But my hope is that this is so simple they will have to put it in their portfolio to explore.”
Doyle, 54, was a veteran teacher who had logged 32 years in schools all over Manhattan, where he primarily taught art and computer graphics. In the school, which was called Quest to Learn, he was teaching a class, Sports for the Mind, which every student attended three times a week. It was described in a jargony flourish on the school’s Web site as “a primary space of practice attuned to new media literacies, which are multimodal and multicultural, operating as they do within specific contexts for specific purposes.” What it was, really, was a class in technology and game design.
September 25, 2010
Birth Control Over Baldness
By NICHOLAS D. KRISTOF
Over the next decade, some astonishing new technologies will spread to fight global poverty. They’re called contraceptives.
This is a high-tech revolution that will affect more people in a more intimate way than almost any other technological stride. The next generation of family planning products will be cheaper, more effective and easier to use — they could be to today’s condoms and diaphragms what a smartphone is to the bricklike cellphones of 20 years ago.
Contraception dates back to ancient Egypt, where amorous couples relied on condoms made of linen. Yet after three millennia, although we can now intercept a missile in outer space, we’re often still outwitted by wandering sperm.
Largely, that’s because research on contraception is pitifully underfunded; if only family planning were treated as seriously as baldness! Contraception research just hasn’t received the resources it deserves, so we have state-of-the-art digital cameras and decades-old family planning methods.
The situation is particularly dire in poor countries, where some 215 million women don’t want to get pregnant yet can’t get their hands on modern contraceptives, according to United Nations figures. One result is continued impoverishment and instability for these countries: it’s impossible to fight poverty effectively when birthrates are sky high.
Yet impressive new contraceptive technologies are in trials and should address this problem. These new products are expected to hit the market in the coming years, in the United States as well as in the developing world.
One is a vaginal ring that releases hormones. There is already such a ring on the market, but it lasts only one month. The new one lasts a year and is being developed by the Population Council, an international nonprofit that researches reproductive health.
This new ring has completed phase III trials on more than 2,200 women in the United States and abroad, and is highly effective, according to Ruth Merkatz, who directs clinical development of the ring for the Population Council. She said that women found it easy to insert the ring themselves, which is crucial in poor countries where there are few health workers. The women’s sexual partners were often unaware of the ring in the trials, and if aware they were untroubled by it, Ms. Merkatz said.
Just as important for accessibility, the rings are likely to be cheap. John Townsend, director of the reproductive health program at the Population Council, estimated that the cost in developing countries would eventually be $5 to $10 for a year of contraception.
Researchers are also beginning to test the rings with other medications. For example, adding a microbicide to the rings may help prevent the spread of H.I.V. and other sexually transmitted infections. Also, researchers are testing whether adding a slow-release compound to a vaginal ring could reduce the risk of certain cancers. Population Council researchers are experimenting with one compound that they say seems to protect breast tissue from cancer.
Another new contraceptive that could have far-reaching impact is the Sino-implant (II), a tiny pair of rods inserted just under the skin (typically in the arm) to release hormones. Other implants are widely used, but one great advantage of the Sino-implant is that it can last four or five years and costs $3 a year or less.
This implant is already on the market in China and Indonesia — 100,000 units were distributed last year — with no safety issues so far. The only drawback is that it requires a trained health worker to insert and remove the implant.
My hunch is that at this point, female readers are seething and muttering something like: Where’s the progress if a woman still has to pump herself full of hormones to avoid pregnancy? Where’s the burden-sharing with men?
That’s a fair point, for the pharmaceuticals developed for men in the reproductive health arena are less about responsibility and more like ...Viagra.
Still, I’m happy to report that there are some nifty new technologies in the works for men. One from India is a reversible sterilization. It’s an injection that hardens to create a plug in the duct carrying sperm. To reverse it, a health worker injects a solvent that dissolves the plug. The plan is to introduce this on a broad scale in the next few years.
Meanwhile, researchers in France are developing special male underclothing to raise the testes snug against the body and elevate their temperature, in effect cooking the sperm so that they are infertile. A report for the Bill and Melinda Gates Foundation says that these “suspensories” provided “long-acting, reliable contraception in multiyear clinical trials, with no impact on testosterone.”
Family planning has long been a missing — and underfunded — link in the effort to overcome global poverty. Half a century after the pill, it’s time to make it a priority and treat it as a basic human right for men and women alike around the world.
I invite you to comment on this column on my blog, On the Ground. Please also join me on Facebook, watch my YouTube videos and follow me on Twitter.
Whatever the medium, wrong is still wrong
By Jon Ferry,
The Province, October 1, 2010
Are we using technology to benefit society, or have we become slaves to it -- and to those who would exploit it for evil, anti-social purposes?
That's the question we face as we learn more about three shocking recent crimes -- and understand how the revulsion we feel toward them has been compounded by the use of Internet-related technology, including camera-equipped cellphones, Facebook and other social media.
We've all been bombarded lately by the horror.
First, a 16-year-old girl was raped in a field in Pitt Meadows, B.C., and images of it were later posted on the Internet. The victim was humiliated further by ugly, outrageous comments on the social-networking site Facebook.
Then, 15-year-old Laura Szendrei, described as "the sweetest little girl," was brutally murdered in a Delta, B.C., park, and Facebook "trollers" shamefully exploited the community's grief by posting heartless, offensive comments.
Now we learn how two Squamish area teens were allegedly coerced by older bullies into a "cockfight" filmed on a cellphone.
Then, a digital video of the fight was erased by school vice-principal Robyn Ross, even though Squamish RCMP say it might have helped their probe.
I couldn't reach Ross Tuesday, but she reportedly told police she deleted the video because she feared students might post it where it would be widely accessible.
One thing is abundantly clear: The use of interactive media is revolutionizing the law and order landscape, and even changing what people, especially young people, believe is right or wrong.
"Youth use and view and understand social media differently than older adults do," Simon Fraser University Prof. Peter Chow-White noted Tuesday.
Chow-White pointed out that teens, increasingly comfortable with the new media, don't tend to have the same inhibitions about posting personal information as do older people.
But they are exposing themselves to a whole new set of problems. A new U.S. report, for example, has noted that cyber-bullying may leave its victims feeling even more dehumanized than traditional bullying.
And B.C. Solicitor General Mike de Jong stated Tuesday that "in terms of educating our young people about the moral aspects of online activities, the province is looking into ways to best address the issues raised by recent events."
Before we condemn the technology, however, we should be aware that it is not inherently flawed.
"It's not good or bad, but it's not neutral either," Chow-White told me. "It can be used for different things and for different purposes by different individuals and different organizations, from people who 'flame' and these so-called trollers, to governments that watch their citizens."
Criminals can use the Internet, but so can police. Indeed, RCMP spokesman Sgt. Peter Thiessen recently urged citizens to help with information about the Pitt Meadows rape through Twitter and Facebook or by texting or phoning Crime Stoppers, where they could do it anonymously.
The real problem, I believe, comes when folks become so enamoured of the freedom the new media affords that they arrogantly believe normal moral standards don't apply to them.
Right and wrong, though, are much the same as they've always been.
We just need to be sure to educate tech-savvy teens -- and older techies, too -- to tell the difference.
October 5, 2010
In Vitro Revelation
By ROBIN MARANTZ HENIG
YESTERDAY, the Nobel Prize in Physiology or Medicine was awarded to a man who was reviled, in his time, as doing work that was considered the greatest threat to humanity since the atomic bomb. Sweet vindication it must be for Robert Edwards, the British biologist who developed the in vitro fertilization procedure that led to the birth of Louise Brown, the first so-called test-tube baby.
It’s hard to believe today, now that I.V.F. has become mainstream, that when Ms. Brown’s imminent birth was announced in 1978, even serious scientists suspected she might be born with monstrous birth defects. How, some wondered, could it be possible to mess around with eggs and sperm in a petri dish and not do some kind of serious chromosomal mischief?
And yet, in the 32 years since, our attitude toward Dr. Edwards’s research has completely changed: I.V.F. is now used so often it is practically routine.
The history of in vitro fertilization demonstrates not only how easily the public will accept new technology once it’s demonstrated to be safe, but also that the nightmares predicted during its development almost never come true. This is a lesson to keep in mind as we debate whether to pursue other promising yet controversial medical advances, from genetic engineering to human cloning.
Dr. Edwards and his collaborator, the gynecologist Patrick Steptoe, who died in 1988, became notorious after they announced that they had fertilized a human egg outside the mother’s womb. In England, reporters camped out on the lawn of the prospective parents, Lesley and John Brown, for weeks before the baby’s due date.
When Mrs. Brown checked into Oldham General Hospital, outside Manchester, to give birth, she did so under an assumed name. Still, reporters sneaked past security dressed as plumbers and priests in hopes of getting a glimpse of her.
Meanwhile, criticism of the pregnancy grew increasingly extreme. Religious groups denounced the two scientists as madmen who were trying to play God. Medical ethicists declared that in vitro fertilization was the first step on a slippery slope toward aberrations like artificial wombs and baby farms.
Fortunately, Louise Brown was not born a monster, but rather a healthy, 5-pound, 12-ounce blond baby girl. She had no birth defects at all, and suddenly her existence seemed to demonstrate only that there was nothing to fear about I.V.F. The birth of the “baby of the century” paved the way for a happy ending for millions of infertile couples — nearly four million babies worldwide have been conceived with the procedure.
True, I.V.F. has not been without consequences. It immediately raised new questions: Would single women or gay couples use the technology? Would it be all right for couples to create and save excess embryos to be used in later attempts if the first try failed?
It has also opened the door to new controversial concepts: “designer babies,” carrying certain selected genes; pre-implantation genetic diagnosis, which allows the possibility of choosing the baby’s sex; and human cloning.
Even today, not everyone is comfortable with in vitro fertilization. In a 2005 survey, 13 percent of British adults, and a surprising 22 percent of those under 24, said the risks involved in such fertility treatments might outweigh the benefits.
Yet with I.V.F. the public has shown how it can debate the usefulness of a new medical technology, reject its abuse and in some cases embrace its benefits. We approve when a woman in her 30s who otherwise couldn’t conceive does so through in vitro fertilization, for example, but we cry foul when a 60-year-old tries to do the same.
As Dr. Edwards himself noted in the early 1970s, just because a technology can be abused doesn’t mean it will be. Electricity is a good thing, he said, regardless of its leading to the invention of the electric chair.
Science fiction is filled with dystopian stories in which the public blindly accepts destructive technologies. But in vitro fertilization offers a more optimistic model. As we continue to develop new ways of improving upon nature, the slope may be slippery, but that’s no reason to avoid taking the first step.
Robin Marantz Henig is the author of “Pandora’s Baby: How the First Test Tube Babies Sparked the Reproductive Revolution.”
October 4, 2010
Aiming to Learn as We Do, a Machine Teaches Itself
By STEVE LOHR
Give a computer a task that can be crisply defined — win at chess, predict the weather — and the machine bests humans nearly every time. Yet when problems are nuanced or ambiguous, or require combining varied sources of information, computers are no match for human intelligence.
Few challenges in computing loom larger than unraveling semantics, understanding the meaning of language. One reason is that the meaning of words and phrases hinges not only on their context, but also on background knowledge that humans learn over years, day after day.
Since the start of the year, a team of researchers at Carnegie Mellon University — supported by grants from the Defense Advanced Research Projects Agency and Google, and tapping into a research supercomputing cluster provided by Yahoo — has been fine-tuning a computer system that is trying to master semantics by learning more like a human. Its beating hardware heart is a sleek, silver-gray computer — calculating 24 hours a day, seven days a week — that resides in a basement computer center at the university, in Pittsburgh. The computer was primed by the researchers with some basic knowledge in various categories and set loose on the Web with a mission to teach itself.
“For all the advances in computer science, we still don’t have a computer that can learn as humans do, cumulatively, over the long term,” said the team’s leader, Tom M. Mitchell, a computer scientist and chairman of the machine learning department.
The Never-Ending Language Learning system, or NELL, has made an impressive showing so far. NELL scans hundreds of millions of Web pages for text patterns that it uses to learn facts, 390,000 to date, with an estimated accuracy of 87 percent. These facts are grouped into semantic categories — cities, companies, sports teams, actors, universities, plants and 274 others. The category facts are things like “San Francisco is a city” and “sunflower is a plant.”
NELL also learns facts that are relations between members of two categories. For example, Peyton Manning is a football player (category). The Indianapolis Colts is a football team (category). By scanning text patterns, NELL can infer with a high probability that Peyton Manning plays for the Indianapolis Colts — even if it has never read that Mr. Manning plays for the Colts. “Plays for” is a relation, and there are 280 kinds of relations. The number of categories and relations has more than doubled since earlier this year, and will steadily expand.
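The bookkeeping described above, category facts plus relation facts between category members, can be sketched in a few lines. This is a toy illustration of the idea, not NELL's actual code or data structures; all names here are invented:

```python
# Toy knowledge base in the spirit of the article's description of NELL:
# categories hold members ("San Francisco is a city"), and relations hold
# pairs of members ("Peyton Manning plays for the Indianapolis Colts").
class KnowledgeBase:
    def __init__(self):
        self.categories = {}   # category name -> set of member phrases
        self.relations = {}    # relation name -> set of (subject, object)

    def add_fact(self, category, member):
        self.categories.setdefault(category, set()).add(member)

    def add_relation(self, relation, subject, obj):
        self.relations.setdefault(relation, set()).add((subject, obj))

    def in_category(self, category, member):
        return member in self.categories.get(category, set())

kb = KnowledgeBase()
kb.add_fact("city", "San Francisco")
kb.add_fact("plant", "sunflower")
kb.add_fact("football_player", "Peyton Manning")
kb.add_fact("football_team", "Indianapolis Colts")
kb.add_relation("plays_for", "Peyton Manning", "Indianapolis Colts")

print(kb.in_category("city", "San Francisco"))   # True
```

The hard part, which this sketch omits entirely, is what NELL actually does: inferring which phrases belong in which categories, with what probability, from raw Web text.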
The learned facts are continuously added to NELL’s growing database, which the researchers call a “knowledge base.” A larger pool of facts, Dr. Mitchell says, will help refine NELL’s learning algorithms so that it finds facts on the Web more accurately and more efficiently over time.
NELL is one project in a widening field of research and investment aimed at enabling computers to better understand the meaning of language. Many of these efforts tap the Web as a rich trove of text to assemble structured ontologies — formal descriptions of concepts and relationships — to help computers mimic human understanding. The ideal has been discussed for years, and more than a decade ago Sir Tim Berners-Lee, who invented the underlying software for the World Wide Web, sketched his vision of a “semantic Web.”
Today, ever-faster computers, an explosion of Web data and improved software techniques are opening the door to rapid progress. Scientists at universities, government labs, Google, Microsoft, I.B.M. and elsewhere are pursuing breakthroughs, along somewhat different paths.
For example, I.B.M.’s “question answering” machine, Watson, shows remarkable semantic understanding in fields like history, literature and sports as it plays the quiz show “Jeopardy!” Google Squared, a research project at the Internet search giant, demonstrates ample grasp of semantic categories as it finds and presents information from around the Web on search topics like “U.S. presidents” and “cheeses.”
Still, artificial intelligence experts agree that the Carnegie Mellon approach is innovative. Many semantic learning systems, they note, are more passive learners, largely hand-crafted by human programmers, while NELL is highly automated. “What’s exciting and significant about it is the continuous learning, as if NELL is exercising curiosity on its own, with little human help,” said Oren Etzioni, a computer scientist at the University of Washington, who leads a project called TextRunner, which reads the Web to extract facts.
Computers that understand language, experts say, promise a big payoff someday. The potential applications range from smarter search (supplying natural-language answers to search queries, not just links to Web pages) to virtual personal assistants that can reply to questions in specific disciplines or activities like health, education, travel and shopping.
“The technology is really maturing, and will increasingly be used to gain understanding,” said Alfred Spector, vice president of research for Google. “We’re on the verge now in this semantic world.”
With NELL, the researchers built a base of knowledge, seeding each kind of category or relation with 10 to 15 examples that are true. In the category for emotions, for example: “Anger is an emotion.” “Bliss is an emotion.” And about a dozen more.
Then NELL gets to work. Its tools include programs that extract and classify text phrases from the Web, programs that look for patterns and correlations, and programs that learn rules. For example, when the computer system reads the phrase “Pikes Peak,” it studies the structure — two words, each beginning with a capital letter, and the last word is Peak. That structure alone might make it probable that Pikes Peak is a mountain. But NELL also reads in several ways. It will mine for text phrases that surround Pikes Peak and similar noun phrases repeatedly. For example, “I climbed XXX.”
NELL, Dr. Mitchell explains, is designed to be able to grapple with words in different contexts, by deploying a hierarchy of rules to resolve ambiguity. This kind of nuanced judgment tends to flummox computers. “But as it turns out, a system like this works much better if you force it to learn many things, hundreds at once,” he said.
For example, the text-phrase structure “I climbed XXX” very often occurs with a mountain. But when NELL reads, “I climbed stairs,” it has previously learned with great certainty that “stairs” belongs to the category “building part.” “It self-corrects when it has more information, as it learns more,” Dr. Mitchell explained.
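The interplay Dr. Mitchell describes, a text pattern proposing a category while previously learned facts veto bad guesses, can be illustrated with a deliberately simplified sketch. This is not NELL's algorithm, which uses many patterns and probabilistic scoring; it only shows the shape of the idea:

```python
# Illustrative sketch: the pattern "I climbed XXX" suggests XXX is a
# mountain, unless the phrase is already known, with high certainty,
# to belong to another category (the article's "stairs" example).
import re

# Facts learned earlier with high confidence (hypothetical contents).
known_categories = {"stairs": "building_part"}

def suggest_mountain(sentence):
    """Return a candidate mountain name, or None if prior knowledge vetoes it."""
    match = re.search(r"I climbed (\w[\w ]*)", sentence)
    if not match:
        return None
    phrase = match.group(1)
    if phrase in known_categories:   # prior knowledge overrides the pattern
        return None
    return phrase

print(suggest_mountain("I climbed Pikes Peak."))  # Pikes Peak
print(suggest_mountain("I climbed stairs."))      # None
```

A real system would combine evidence from many such patterns, and from the phrase's structure (two capitalized words ending in "Peak"), before committing a fact to the knowledge base.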
NELL, he says, is just getting under way, and its growing knowledge base of facts and relations is intended as a foundation for improving machine intelligence. Dr. Mitchell offers an example of the kind of knowledge NELL cannot manage today, but may someday. Take two similar sentences, he said. “The girl caught the butterfly with the spots.” And, “The girl caught the butterfly with the net.”
A human reader, he noted, inherently understands that girls hold nets, and girls are not usually spotted. So, in the first sentence, “spots” is associated with “butterfly,” and in the second, “net” with “girl.”
“That’s obvious to a person, but it’s not obvious to a computer,” Dr. Mitchell said. “So much of human language is background knowledge, knowledge accumulated over time. That’s where NELL is headed, and the challenge is how to get that knowledge.”
A helping hand from humans, occasionally, will be part of the answer. For the first six months, NELL ran unassisted. But the research team noticed that while it did well with most categories and relations, its accuracy on about one-fourth of them trailed well behind. Starting in June, the researchers began scanning each category and relation for about five minutes every two weeks. When they find blatant errors, they label and correct them, putting NELL’s learning engine back on track.
When Dr. Mitchell scanned the “baked goods” category recently, he noticed a clear pattern. NELL was at first quite accurate, easily identifying all kinds of pies, breads, cakes and cookies as baked goods. But things went awry after NELL’s noun-phrase classifier decided “Internet cookies” was a baked good. (Its database related to baked goods or the Internet apparently lacked the knowledge to correct the mistake.)
NELL had read the sentence “I deleted my Internet cookies.” So when it read “I deleted my files,” it decided “files” was probably a baked good, too. “It started this whole avalanche of mistakes,” Dr. Mitchell said. He corrected the Internet cookies error and restarted NELL’s bakery education.
His ideal, Dr. Mitchell said, was a computer system that could learn continuously with no need for human assistance. “We’re not there yet,” he said. “But you and I don’t learn in isolation either.”
November 15, 2010
Where Cinema and Biology Meet
By ERIK OLSEN
When Robert A. Lue considers the “Star Wars” Death Star, his first thought is not of outer space, but inner space.
“Luke’s initial dive into the Death Star, I’ve always thought, is a very interesting way how one would explore the surface of a cell,” he said.
That particular scene has not yet been tried, but Dr. Lue, a professor of cell biology and the director of life sciences education at Harvard, says it is one of many ideas he has for bringing visual representations of some of life’s deepest secrets to the general public.
Dr. Lue is one of the pioneers of molecular animation, a rapidly growing field that seeks to bring the power of cinema to biology. Building on decades of research and mountains of data, scientists and animators are now recreating in vivid detail the complex inner machinery of living cells.
The field has spawned a new breed of scientist-animators who not only understand molecular processes but also have mastered the computer-based tools of the film industry.
“The ability to animate really gives biologists a chance to think about things in a whole new way,” said Janet Iwasa, a cell biologist who now works as a molecular animator at Harvard Medical School.
Dr. Iwasa says she started working with visualizations when she saw her first animated molecule five years ago. “Just listening to scientists describe how the molecule moved in words wasn’t enough for me,” she said. “What brought it to life was really seeing it in motion.”
In 2006, with a grant from the National Science Foundation, she spent three months at the Gnomon School of Visual Effects, an animation boot camp in Hollywood, where, while she worked on molecules, her colleagues, all male, were obsessed with creating monsters and spaceships.
To compose her animations, Dr. Iwasa draws on publicly available resources like the Protein Data Bank, a comprehensive and growing database containing three-dimensional coordinates for all of the atoms in a protein. Though she no longer works in a lab, Dr. Iwasa collaborates with other scientists.
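The Protein Data Bank's legacy text format records each atom's three-dimensional coordinates in fixed column positions, which is part of what makes the data easy to pull into animation software. A minimal Python sketch of reading those coordinates (illustrative only, not any animator's actual pipeline; the sample record is a made-up but format-correct line):

```python
# Extract (x, y, z) atom coordinates from PDB-format text. Per the PDB
# format specification, ATOM/HETATM records store x, y and z in columns
# 31-38, 39-46 and 47-54 (1-indexed), i.e. slices [30:38], [38:46], [46:54].
def parse_atoms(pdb_text):
    atoms = []
    for line in pdb_text.splitlines():
        if line.startswith(("ATOM", "HETATM")):
            x = float(line[30:38])
            y = float(line[38:46])
            z = float(line[46:54])
            atoms.append((x, y, z))
    return atoms

sample = "ATOM      1  N   MET A   1      38.198  19.582  17.111"
print(parse_atoms(sample))   # [(38.198, 19.582, 17.111)]
```

With more than 63,000 structures in the database, the same few lines scale from a single protein to bulk processing, which is roughly what tools that feed Maya from the Protein Data Bank automate.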
“All that we had before — microscopy, X-ray crystallography — were all snapshots,” said Tomas Kirchhausen, a professor in cell biology at Harvard Medical School and a frequent collaborator with Dr. Iwasa. “For me, the animations are a way to glue all this information together in some logical way. By doing animation I can see what makes sense, what doesn’t make sense. They force us to confront whether what we are doing is realistic or not.” For example, Dr. Kirchhausen studies the process by which cells engulf proteins and other molecules. He says animations help him picture how a particular three-legged protein called clathrin functions within the cell.
If there is a Steven Spielberg of molecular animation, it is probably Drew Berry, a cell biologist who works for the Walter and Eliza Hall Institute of Medical Research in Melbourne, Australia. Mr. Berry’s work is revered for artistry and accuracy within the small community of molecular animators, and has also been shown in museums, including the Museum of Modern Art in New York and the Centre Pompidou in Paris. In 2008, his animations formed the backdrop for a night of music and science at the Guggenheim Museum called “Genes and Jazz.”
“Scientists have always done pictures to explain their ideas, but now we’re discovering the molecular world and able to express and show what it’s like down there,” Mr. Berry said. “Our understanding is just exploding.”
In October, Mr. Berry was awarded a 2010 MacArthur Fellowship, which he says he will put toward developing visualizations that explore the patterns of brain activity related to human consciousness.
The new molecular animators are deeply aware that they are picking up where many talented scientist-artists left off. They are quick to pay homage to pioneers in molecular graphics like Arthur J. Olson and David Goodsell, both at the Scripps Research Institute in San Diego.
Perhaps the pivotal moment for molecular animations came four years ago with a video called “The Inner Life of the Cell.” Produced by BioVisions, a scientific visualization program at Harvard’s Department of Molecular and Cellular Biology, and a Connecticut-based scientific animation company called Xvivo, the three-minute film depicts marauding white blood cells attacking infections in the body. It was shown at the 2006 Siggraph conference, an annual convention of digital animation. After it was posted on YouTube, it garnered intense media attention.
BioVisions’ most recent animation, called “Powering the Cell: Mitochondria,” was released in October. It delves inside the complex molecules that reside in our cells and convert food into energy. Produced in high definition, “Powering the Cell” takes viewers on a swooping roller coaster ride through the microscopic machinery of the cell.
Sophisticated programs like Maya allow animators to create vibrant worlds from scratch, but that isn’t always necessary or desirable in biology. A company called Digizyme in Brookline, Mass., has developed a way for animators to pull data directly into Maya from the Protein Data Bank so that many of the over 63,000 proteins in the database can be easily rendered and animated.
Gaël McGill, Digizyme’s chief executive, says access to this data is critical to scientific accuracy. “For us the starting point is always the science,” Dr. McGill said. “Do we have data to support the image we’re going to create?”
Indeed, while enthusiasm runs high among those directly involved in the field, others in the scientific community are uncertain about the value of these animations for actual scientific research. While acknowledging the potential to help refine a hypothesis, for example, some scientists say that visualizations can quickly veer into fiction.
“Some animations are clearly more Hollywood than useful display,” says Peter Walter, an investigator at the Howard Hughes Medical Institute in San Francisco. “It can become hard to distinguish between what is data and what is fantasy.”
Dr. McGill acknowledges that showing cellular processes can involve a significant dose of conjecture. Animators take liberty with color and space, among other qualities, in order to highlight a particular function or part of the cell. “All the events we are depicting are so small they are below the wavelength of light,” he said.
But he contends that these visualizations will be increasingly necessary in a world awash in data. “In the face of increasing complexity, and increasing data, we’re faced with a major problem,” Dr. McGill said.
Certainly, it will play a significant part in education. The Harvard biologist E.O. Wilson is leading a project to develop the next generation of digital biology textbook that will integrate complex visualizations as a core part of the curriculum. Called “Life on Earth,” the project will include visualizations from Mr. Berry and is being overseen by Dr. McGill, who believes it could change how students learn biology.
“I think visualization is going to be the key to the future,” Dr. McGill said.
Interactive Graphic: A New Generation of Robotic Weapons
Several manufacturers and research facilities are changing the face of the battlefield with robots designed to help transport equipment, gather intelligence and attack enemy forces. Distinguishing friend from foe is especially challenging.
December 13, 2010, 8:16 pm
The Human Incubator
By TINA ROSENBERG
Fixes looks at solutions to social problems and why they work.
A mother uses the warmth of her body as a human incubator as she cuddles her prematurely born daughter in the Philippines. (Bullit Marquez/Associated Press)
Sometimes, the best way to progress isn’t to advance — to step up with more money, more technology, more modernity. It’s to retreat.
Towards the end of the 1970s, the Mother and Child Institute in Bogota, Colombia, was in deep trouble. The institute was the city’s obstetrical reference hospital, where most of the city’s poor women went to give birth. Nurses and doctors were in short supply. In the newly created neonatal intensive care unit, there were so few incubators that premature babies had to share them — sometimes three to an incubator. The crowded conditions spread infections, which are particularly dangerous for preemies. The death rate was high.
Dr. Edgar Rey, the chief of the pediatrics department, could have attempted to do what many other hospital officials would have done: wage a political fight for more money, more incubators and more staff.
He would likely have lost. What was happening at the Mother and Child Institute was not unusual. Conditions were much better, in fact, than at most public hospitals in the third world. Hospitals that mainly serve the poor have very little political clout, which means that conditions in their wards sometimes seem to have been staged by Hieronymus Bosch. They have too much disease, too few nurses and sometimes no doctors at all. They can be so crowded that patients sleep on the floor and so broke that people must bring their own surgical gloves and thread. I recently visited a hospital in Ethiopia that didn’t even have water — the nurses washed their hands after they got home at night.
Rey thought about the basics. What is the purpose of an incubator? It is to keep a baby warm, oxygenated and nourished — to simulate as closely as possible the conditions of the womb. There is another mechanism for accomplishing these goals, Rey reasoned, the same one that cared for the baby during its months of gestation. Rey also felt something that probably all mothers feel intuitively: that one reason babies in incubators did so poorly was that they were separated from their mothers. Was there a way to avoid the incubator by employing the baby’s mother instead?
What he came up with is an idea now known as kangaroo care. Aspects of kangaroo care are now in use even in wealthy countries — most hospitals in the United States, for example, have adopted some kangaroo care practices. But its real impact has been felt in poor countries, where it has saved countless preemies’ lives and helped others to survive with fewer problems.
A mother and child in Colombia, where the “kangaroo care” method was first used in the late 1970s, for lack of incubators. (Agence France-Presse)
In Rey’s system, a mother of a preemie puts the baby on her exposed chest, dressed only in a diaper and sometimes a cap, in an upright or semi-upright position. The baby is strapped in by a scarf or other cloth sling supporting its bottom, and all but its head is covered by mom’s shirt. The mother keeps the baby like that, skin-to-skin, as much as possible, even sleeping in a reclining chair. Fathers and other relatives or friends can wear the baby as well to give the mother a break. Even very premature infants can go home with their families (with regular follow-up visits) once they are stable and their mothers are given training.
The babies stay warm, their own temperature regulated by the sympathetic biological responses that occur when mother and infant are in close physical contact. The mother’s breasts, in fact, heat up or cool down depending on what the baby needs. The upright position helps prevent reflux and apnea. Feeling the mother’s breathing and heartbeat helps the babies to stabilize their own heart and respiratory rates. They sleep more. They can breastfeed at will, and the constant contact encourages the mother to produce more milk. Babies breastfeed earlier and gain more weight.
The physical closeness encourages emotional closeness, which leads to lower rates of abandonment of premature infants. This was a serious problem among the patients of Rey’s hospital; without being able to hold and bond with their babies, some mothers had little attachment to counter their feelings of being overwhelmed with the burdens of having a preemie. But kangaroo care also had enormous benefits for parents. Every parent, I think, can understand the importance of holding a baby instead of gazing at him in an incubator. With kangaroo care, parents and baby go through less stress. Nurses who practice kangaroo care also report that mothers also feel more confident and effective because they are the heroes in their babies’ care, instead of passive bystanders watching a mysterious process from a distance.
The hospitals were the third beneficiaries. Kangaroo care freed up incubators. Getting preemies home as soon as they were stable also lessened overcrowding and allowed nurses and doctors to concentrate on the patients who needed them most.
Kangaroo care has been widely studied. A trial in a Bogota hospital of 746 low birth weight babies randomly assigned to either kangaroo or conventional incubator care found that the kangaroo babies had shorter hospital stays, better growth of head circumference and fewer severe infections. They had slightly better rates of survival, but the difference was not statistically significant. Other studies have found fewer differences between kangaroo and conventional methods. A conservative summary of the evidence to date is that kangaroo care is at least as good as conventional treatment — and perhaps better.
In much of the world, however, whether a mother’s chest is better or worse than an incubator is not the point. Hospitals have no incubators, or have only a few. And millions of mothers never see a hospital — they give birth at home. In very poor countries, where pregnant women are unlikely to get the food and care they need, low birth weight babies are very common — nearly one in five babies in Malawi, for example, is too small. Nearly a million low birth weight babies die each year in poor countries. But thanks to kangaroo care, many of them can be saved. The Manama Mission Hospital in southwest Zimbabwe, for example, had available only antibiotics and piped oxygen in its neonatal unit. Survival rates for babies born under 1500 grams (3.3 lbs.) improved from 10 percent to 50 percent when kangaroo care was started in the 1980s. In 2003, the World Health Organization put kangaroo care on its list of endorsed practices.
Dr. Rey took a challenge that most people would assume requires more money, personnel and technology and solved it in a way that requires less of all three. I am not a romantic who wants to abandon modern medical care in favor of traditional solutions. People with AIDS in South Africa need antiretroviral therapy, not traditional healers’ home brews. If you are bitten by a cobra in India, you should not go to the temple. You should go to the hospital for antivenin. Modern medical care is essential and technology very often saves lives.
Kangaroo care, however, is modern medical care, by which I mean that its effectiveness is proven in randomized controlled trials — the strongest kind of evidence. And because it is powered by the human body alone, it is theoretically available to hundreds of millions of mothers who would otherwise have no hope of saving their babies.
But theoretical availability is only helpful for theoretical babies. Another of kangaroo care’s important innovations is that its inventors realized that ideas don’t travel by themselves. They established a way to get the practice from Bogota into hospitals and clinics all over the world — something that takes a lot more creativity and work than it sounds. On Saturday I’ll respond to comments and talk about how kangaroo care has been able to reach the places that need it most.
Tina Rosenberg won a Pulitzer Prize for her book “The Haunted Land: Facing Europe’s Ghosts After Communism.” She is a former editorial writer for The Times and now a contributing writer for the paper’s Sunday magazine. Her new book, “Join the Club: How Peer Pressure Can Transform the World,” is forthcoming from W.W. Norton.
December 24, 2010
African Huts Far From the Grid Glow With Renewable Power
By ELISABETH ROSENTHAL
KIPTUSURI, Kenya — For Sara Ruto, the desperate yearning for electricity began last year with the purchase of her first cellphone, a lifeline for receiving small money transfers, contacting relatives in the city or checking chicken prices at the nearest market.
Charging the phone was no simple matter in this farming village far from Kenya’s electric grid.
Every week, Ms. Ruto walked two miles to hire a motorcycle taxi for the three-hour ride to Mogotio, the nearest town with electricity. There, she dropped off her cellphone at a store that recharges phones for 30 cents. Yet the service was in such demand that she had to leave it behind for three full days before returning.
That wearying routine ended in February when the family sold some animals to buy a small Chinese-made solar power system for about $80. Now balanced precariously atop their tin roof, a lone solar panel provides enough electricity to charge the phone and run four bright overhead lights with switches.
“My main motivation was the phone, but this has changed so many other things,” Ms. Ruto said on a recent evening as she relaxed on a bench in the mud-walled shack she shares with her husband and six children.
As small-scale renewable energy becomes cheaper, more reliable and more efficient, it is providing the first drops of modern power to people who live far from slow-growing electricity grids and fuel pipelines in developing countries. Although dwarfed by the big renewable energy projects that many industrialized countries are embracing to rein in greenhouse gas emissions, these tiny systems are playing an epic, transformative role.
Since Ms. Ruto hooked up the system, her teenagers’ grades have improved because they have light for studying. The toddlers no longer risk burns from the smoky kerosene lamp. And each month, she saves $15 in kerosene and battery costs — and the $20 she used to spend on travel.
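Taken together, the figures Ms. Ruto cites imply a remarkably fast payback on the system. The sketch below uses only the numbers from her account ($80 for the kit, $15 a month saved on kerosene and batteries, $20 a month saved on travel); the simple-payback formula is illustrative and ignores financing, maintenance and the panel's lifespan.

```python
# Simple payback estimate for the home solar system, using the
# figures Ms. Ruto reports (illustrative arithmetic only).
system_cost = 80.0                 # one-time cost of the solar kit, USD
kerosene_battery_savings = 15.0    # USD saved per month on fuel and batteries
travel_savings = 20.0              # USD saved per month on charging trips

monthly_savings = kerosene_battery_savings + travel_savings
payback_months = system_cost / monthly_savings
print(f"Payback period: about {payback_months:.1f} months")
```

By this rough accounting the system pays for itself in under three months, which helps explain why 63 neighboring families have followed suit.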
In fact, neighbors now pay her 20 cents to charge their phones, although that business may soon evaporate: 63 families in Kiptusuri have recently installed their own solar power systems.
“You leapfrog over the need for fixed lines,” said Adam Kendall, head of the sub-Saharan Africa power practice for McKinsey & Company, the global consulting firm. “Renewable energy becomes more and more important in less and less developed markets.”
The United Nations estimates that 1.5 billion people across the globe still live without electricity, including 85 percent of Kenyans, and that three billion still cook and heat with primitive fuels like wood or charcoal.
There is no reliable data on the spread of off-grid renewable energy on a small scale, in part because the projects are often installed by individuals or tiny nongovernmental organizations.
But Dana Younger, senior renewable energy adviser at the International Finance Corporation, the World Bank Group’s private lending arm, said there was no question that the trend was accelerating. “It’s a phenomenon that’s sweeping the world; a huge number of these systems are being installed,” Mr. Younger said.
With the advent of cheap solar panels and high-efficiency LED lights, which can light a room with just 4 watts of power instead of 60, these small solar systems now deliver useful electricity at a price that even the poor can afford, he noted. “You’re seeing herders in Inner Mongolia with solar cells on top of their yurts,” Mr. Younger said.
In Africa, nascent markets for the systems have sprung up in Ethiopia, Uganda, Malawi and Ghana as well as in Kenya, said Francis Hillman, an energy entrepreneur who recently shifted his Eritrea-based business, Phaesun Asmara, from large solar projects financed by nongovernmental organizations to a greater emphasis on tiny rooftop systems.
In addition to these small solar projects, renewable energy technologies designed for the poor include simple subterranean biogas chambers that make fuel and electricity from the manure of a few cows, and “mini” hydroelectric dams that can harness the power of a local river for an entire village.
Yet while these off-grid systems have proved their worth, the lack of an effective distribution network or a reliable way of financing the start-up costs has prevented them from becoming more widespread.
“The big problem for us now is there is no business model yet,” said John Maina, executive coordinator of Sustainable Community Development Services, or Scode, a nongovernmental organization based in Nakuru, Kenya, that is devoted to bringing power to rural areas.
Just a few years ago, Mr. Maina said, “solar lights” were merely basic lanterns, dim and unreliable.
“Finally, these products exist, people are asking for them and are willing to pay,” he said. “But we can’t get supply.” He said small African organizations like his do not have the purchasing power or connections to place bulk orders themselves from distant manufacturers, forcing them to scramble for items each time a shipment happens to come into the country.
Part of the problem is that the new systems buck the traditional mold, in which power is generated by a very small number of huge government-owned companies that gradually extend the grid into rural areas. Investors are reluctant to pour money into products that serve a dispersed market of poor rural consumers because they see the risk as too high.
“There are many small islands of success, but they need to go to scale,” said Minoru Takada, chief of the United Nations Development Program’s sustainable energy program. “Off-grid is the answer for the poor. But people who control funding need to see this as a viable option.”
Even United Nations programs and United States government funds that promote climate-friendly energy in developing countries hew to large projects like giant wind farms or industrial-scale solar plants that feed into the grid. A $300 million solar project is much easier to finance and monitor than 10 million home-scale solar systems in mud huts spread across a continent.
As a result, money does not flow to the poorest areas. Of the $162 billion invested in renewable energy last year, according to the United Nations, experts estimate that $44 billion was spent in China, India and Brazil collectively, and $7.5 billion in the many poorer countries.
Only 6 to 7 percent of solar panels are manufactured to produce electricity that does not feed into the grid; that includes systems like Ms. Ruto’s and solar panels that light American parking lots and football stadiums.
Still, some new models are emerging. Husk Power Systems, a young company supported by a mix of private investment and nonprofit funds, has built 60 village power plants in rural India since 2007, generating electricity from rice husks for 250 hamlets.
In Nepal and Indonesia, the United Nations Development Program has helped finance the construction of very small hydroelectric plants that have brought electricity to remote mountain communities. Morocco provides subsidized solar home systems at a cost of $100 each to remote rural areas where expanding the national grid is not cost-effective.
What has most surprised some experts in the field is the recent emergence of a true market in Africa for home-scale renewable energy and for appliances that consume less energy. As the cost of reliable equipment decreases, families have proved ever more willing to buy it by selling a goat or borrowing money from a relative overseas, for example.
The explosion of cellphone use in rural Africa has been an enormous motivating factor. Because rural regions of many African countries lack banks, the cellphone has been embraced as a tool for commercial transactions as well as personal communications, adding an incentive to electrify for the sake of recharging.
M-Pesa, Kenya’s largest mobile phone money transfer service, handles an annual cash flow equivalent to more than 10 percent of the country’s gross domestic product, most of it in tiny transactions that rarely exceed $20.
The cheap renewable energy systems also allow the rural poor to save money on candles, charcoal, batteries, wood and kerosene. “So there is an ability to pay and a willingness to pay,” said Mr. Younger of the International Finance Corporation.
In another Kenyan village, Lochorai, Alice Wangui, 45, and Agnes Mwaforo, 35, formerly subsistence farmers, now operate a booming business selling and installing energy-efficient wood-burning cooking stoves made of clay and metal for a cost of $5. Wearing matching bright orange tops and skirts, they walk down rutted dirt paths with cellphones ever at their ears, edging past goats and dogs to visit customers and to calm those on the waiting list.
Hunched over her new stove as she stirred a stew of potatoes and beans, Naomi Muriuki, 58, volunteered that the appliance had more than halved her use of firewood. Wood has become harder to find and expensive to buy as the government tries to limit deforestation, she added.
In Tumsifu, a slightly more prosperous village of dairy farmers, Virginia Wairimu, 35, is benefiting from an underground tank in which the manure from her three cows is converted to biogas, which is then pumped through a rubber tube to a gas burner.
“I can just get up and make breakfast,” Ms. Wairimu said. The system was financed with a $400 loan from a demonstration project that has since expired.
In Kiptusuri, the Firefly LED system purchased by Ms. Ruto is this year’s must-have item. The smallest one, which costs $12, consists of a solar panel that can be placed in a window or on a roof and is connected to a desk lamp and a phone charger. Slightly larger units can run radios and black-and-white television sets.
Of course, such systems cannot compare with a grid connection in the industrialized world. A week of rain can mean no lights. And items like refrigerators need more, and more consistent, power than a panel provides.
Still, in Kenya, even grid-based electricity is intermittent and expensive: families must pay more than $350 just to have their homes hooked up.
“With this system, you get a real light for what you spend on kerosene in a few months,” said Mr. Maina, of Sustainable Community Development Services. “When you can light your home and charge your phone, that is very valuable.”
Mobile Phones for Women: A New Approach for Social Welfare in the Developing World
Telecoms, nongovernmental organizations (NGOs) and nonprofits are pushing to put mobile phones directly in the hands of women in low- and middle-income countries
Enas Salameh, a 24-year-old college graduate living in the Palestinian West Bank city of Jenin, needed a job this summer. But her family finds it unacceptable for a woman to venture alone into the city without a male companion or an appointment. Fortunately, it's fine to use a mobile phone. In fact, although only 16 percent of Palestinian households have Internet access, 81 percent have a cell phone, according to a 2009 United Nations report. Salameh was thus able to sign up for a text message–based job-matching program sponsored by a service called Souktel. She posted a "mini-resume," browsed for suitable jobs via text messages, and then interviewed in person after an appointment was set. On September 22nd, she started a data-entry job with the German aid agency GTZ.
Although the job does not take advantage of her training in physical therapy, "this is better than staying at home," she says through a translator, "and I think that I am gaining new experiences to be a useful woman in my community." Without mobile phones, says Souktel co-founder Jacob Korenblum, many of the approximately 750 women worldwide who have found work through the program would still be unemployed.
Mobile technology was available to Salameh, but that's often not the case for women. A 2010 report by London-based telecom industry advocacy group GSMA (for Groupe Speciale Mobile Association) and the Cherie Blair Foundation for Women found a "mobile gender gap" in low- and middle-income countries: women are 21 percent less likely than men to own a mobile phone. The gap is widest in Asia, at 37 percent. Once they get phones, women nearly uniformly report feeling safer, more connected and more independent. Nearly half say the phones help increase income and professional opportunities.
So, in October GSMA launched the "mWomen Program," with support from Cherie Blair and Secretary of State Hillary Clinton ("mWomen" is for mobile women). The goal is to halve the number of women in the developing world who lack mobile phones within three years by putting phones in the hands of another 150 million women.
GSMA's mWomen working group met in Chennai, India, in early November. Twenty-three organizations, including the telecom Ericsson, representing 115 developing countries committed to the project. And the program's recently announced "app challenge"—which solicits apps for simple cell phones and smart phones that can help to address the needs of women living at the "base of the pyramid" in the developing world—has received dozens of entries, including one from Souktel.
But first, women need the phones to run these apps. The working group is therefore examining various business models and marketing tools to overcome cultural, educational and financial barriers. So far, ideas include using direct marketing models similar to those developed by Avon or Tupperware to put women in charge of selling to women, and hiring only women to serve as customer service representatives for telecoms' female customers, says Trina DasGupta, mWomen program director at GSMA.
Going mobile, skipping computers
Mobile phones are nothing new in the developing world. Nonprofit agencies and NGOs have known for years how to partner effectively with telecommunications companies to deliver social goods such as cash payments to locals via mobile phones. The new challenge is getting the technology directly and specifically into the hands of women, rather than focusing on families. In the latter case the devices typically become male property, and women never touch the phones.
Many women in the developing world, especially those living in more restrictive cultures, are impoverished, semiliterate or illiterate and may rarely leave home alone to avoid the risk of shaming the family. The mWomen movement aims to improve the social welfare of women and their families via mobile technology—more effectively, perhaps, than if the phones and apps were in men's hands. Women are using phones for activities ranging from calling their husbands who may work far away to obtaining health care for their children to running small businesses to reporting violence.
The mWomen approach is no cure-all for gender inequality or poverty. Still, a growing body of research supports the power of information and communications technology (ICT), including mobile phones and related jobs, in promoting women's advancement and overall economic progress. A January 2010 report by the International Center for Research on Women identified success stories for nine technologies that have been integrated into programs to help women advance economically—four of them involve ICT: training women in technical and career skills to enter the ICT labor force; village mobile phones to help female entrepreneurs; outsourced ICT services that provide job opportunities for women; and ICT call centers or kiosks that help them start small businesses. Recognizing women as more than end-users of the technology is key to successful projects.
The proliferation of mobile phones is also renewing enthusiasm among many people who work in the fields of social welfare and social justice as well as providing new inroads for breaking down a worldwide technology gender gap.
"Mobile technology is relatively simple and is more accessible to women," says Katrin Verclas, co-founder and editor of MobileActive.org, a network of NGO and other program directors and managers who use mobile technology for social impact. "And the barrier to use is much lower—it's not as intimidating compared to computers. The intimidation factor for poor and possibly only semiliterate women for a computer versus a mobile phone is completely different by an order of magnitude."
Mobile technology also scales up for large populations in ways that social programs rarely achieve (although small-scale programs can be ideal for reaching highly marginalized or rural populations). "You can only build so many clinics, you can only send so much money. They need more solutions. The mobile phone is a solution to deliver basic services at scale at a much lower cost than it would to build out 100,000 clinics, per se," DasGupta says.
mWomen app pioneers
Dozens of mWomen programs and apps already exist in the field, often conceived and implemented by local women. Program directors, organizers and field workers are comparing notes and sharing strategies via the GSMA, online bulletin boards and in-person gatherings such as a tech salon in New York City organized in September by MobileActive.
Attendees at the New York event learned of a recent pilot program in Africa to introduce cell phones and a text message–driven community bulletin board in 15 villages in Senegal that helped local women post messages and share educational information about malaria. The villages lack running water and electricity, but 58 percent of residents had used a mobile phone. Staffers with the Jokko Initiative trained locals in how to navigate the bulletin board by mapping its phone tree—with labeled sticks on the ground. The bulletin board and phones freed the women of the need for men to read and type their messages for them. Erica Kochi, part of the initiative via Jokko-partner United Nations Children's Fund (UNICEF), says that women's literacy and numeracy went up as they used the phones to share information and calculate savings at the market.
Anne Roos-Weil described Pesinet, a women-run mobile service she co-founded that brings health care to infants in Mali—where one in five children dies before age five, usually from malaria, measles or respiratory diseases—and in other African countries. For a monthly fee of $1 (equivalent to a day's wages), subscribers are visited weekly at home by a Pesinet agent who weighs newborns and asks the mother questions about diarrhea, fever and other health matters. The agent sends the data via a Java mobile phone app to a server accessed by a nearby doctor, who assesses the child's health. The doctor then recommends a visit to the clinic if necessary, where the child receives a free medical exam and half-price medication for the diseases that kill most children.
Pesinet's subscription base has increased 70 percent since January, and subscribers almost uniformly find it affordable and satisfying. But it needs to double its enrollment to 1,000 subscribers to cover its expenses.
Funding for nonprofits and NGOs can be unstable and subject to the whims of donor nations and individuals. Enter telecommunications companies, along with their customer bases and business models. Telecoms can build a program aimed at social welfare that will take as long as a decade to pay for itself, long after a start-up's donor patience or grant money might run out.
"Telcos have to think of the business angle as well as the social angle," says Shainoor Khoja, managing director for social programs for Roshan, the leading telecommunications service provider in Afghanistan, with 3.8 million active subscribers. "Sometimes NGOs seem to focus on the social angle only, which is their role, without something being sustainable. It's a real problem." Roshan is 51 percent owned by the Aga Khan Fund for Economic Development, so it balances mandates for development and profitability. The company was the first in Afghanistan to post a billboard that featured a picture of a woman.
"Straight-out business-wise we are firm believers that putting a mobile phone in the hands of every woman and girl is essential," Khoja says. "We see the mobile phone as not just one communication tool—we see it as much more. Because the minute you put a voice phone in the hands of the woman, you empower her and also give her access to financial services, information, literacy, safety to pursue a livelihood—a whole variety of things that you and I would take for granted, because we have a phone."
Along these lines, Roshan has established 170 "women's public call centers" in Afghanistan, where women with phones purchased via a Roshan-sponsored microfinance loan set up a small business to broker calls and sell airtime to women without phones. The country already has 6,000 public call centers, all run by men. The women's public call centers solve a number of problems at once—women can avoid the shame and danger of entering a male-run call shop alone; they can call their husbands who often work out of town; and female agents who sell their airtime bring a second income to their family as well as gain financial and business management skills.
Mobile phones are particularly valuable to Afghan women during childbirth, when a midwife or doctor may have to be called. Afghanistan has the second highest maternal mortality rate in the world, and as of 2004 women there bore an average of seven children.
Some nonprofits are figuring out ways to build on telecoms' existing services. For instance, MTN, a telecom in South Africa, offers free "please-call-me" text messages. After being beeped or signaled, a party calls back the person who sent the message, saving the original sender the expense of the call. Seeing an opportunity, the nonprofit Praekelt Foundation negotiated with MTN to advertise an AIDS hotline number and other services in the white space at the bottom of one million of these free messages daily. Call volume on the hotline tripled, and operators assisted with information, counseling and referrals to clinics—no small achievement in a nation where AIDS-related mortality in women ages 20 to 39 recently tripled. This effort, which has reached more than 40 million people to date, was part of a project managed from a hospital in a district where 60 percent of pregnant women are HIV-positive.
Despite these advances, founder Gustav Praekelt is ambivalent about embracing the mWomen designation, although nearly all the mobile services his organization provides touch on women's issues. "'mWomen' is such a vague term," Praekelt says. "What does it really mean? There are a host of things you can put under that topic. We work on a lot: gender, rape, abuse. These aren't simple questions, and building a simple app isn't going to have enough impact. We believe mobile works because it is so incredibly scalable. So our focus is to achieve really scalable projects. If a project can't ultimately reach up to one million people, we don't want to be involved. We have one billion people in Africa and 400 million phones, and that's what I want to focus on."
To this end, Praekelt announced an $825,000 grant last week from eBay founder Pierre Omidyar's Omidyar Network that will ultimately allow the foundation to extend its please-call-me hotline messaging service to reach up to 500 million people. And a mobile portal, hosted by the telecom Vodafone, will provide a free entertainment-oriented platform for youths to receive information and discuss their issues with love, sex, relations, gender, cultural constraints and HIV. The messaging service and discussion platform could reach half the population of Africa.
Cultural taboos can still stall mobile initiatives. The mobile community received a new shock last month when elders in the village of Lank, in the Indian state of Uttar Pradesh, forbade unmarried women from using cell phones. They feared that the women were using phones to make plans to elope—a transgression that can result in a so-called honor killing. Young men and women managed to flirt, rendezvous and elope for centuries before the arrival of mobile phones, but the Lank ban illustrates the remaining social tensions that surround women's growing use of mobile technology in parts of the developing world.
"The issue is not the phone itself," says GSMA's DasGupta. "The issue is about educating community elders on how the mobile phone has more positive benefits in terms of helping women have access to income-generating and education opportunities."
And MobileActive's Verclas notes that some philanthropists believe nonprofit donor dollars go further if you fund a project that focuses on women rather than on men. "There is now a lot of evidence that economic gains of women benefit communities," Verclas says. "So it is not just a woman who benefits. Her children, her community benefits. There is a radiating effect."
Both outsiders and insiders to the NGO world can be a bit hostile to the idea that women-headed agencies and organizations are delivering social welfare–tailored technology primarily or only to women. "When women huddle and talk to each other, there can be a little backlash," Verclas says. "But I come down hard and say that this is the prerogative of excluded communities. I'm unapologetic about that."
And there are larger social changes—attitude shifts—that can come about when women start to actively use mobile technology. Says Roshan's Khoja, "If you came to Afghanistan and saw how women just flip their phones on and communicate, and saw what men think of these women—they think highly of them, they think they are capable and bright. That kind of change is hard to come by."
January 1, 2011
Computers That See You and Keep Watch Over You
By STEVE LOHR
Perched above the prison yard, five cameras tracked the play-acting prisoners, and artificial-intelligence software analyzed the images to recognize faces, gestures and patterns of group behavior. When two groups of inmates moved toward each other, the experimental computer system sent an alert — a text message — to a corrections officer that warned of a potential incident and gave the location.
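The alerting rule described above can be caricatured in a few lines. This is a toy sketch only: the coordinates, threshold and message format are invented, and a real system would infer positions from video via face and gesture recognition rather than consuming clean point data.

```python
# Toy sketch: flag two tracked groups that drift within a threshold distance.
# All numbers and the message format are invented for illustration.

def centroid(points):
    """Average position of a group of tracked individuals."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def proximity_alert(group_a, group_b, threshold=10.0):
    """Return an alert string if the groups' centroids are closer than threshold."""
    ax, ay = centroid(group_a)
    bx, by = centroid(group_b)
    dist = ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    if dist < threshold:
        return f"ALERT: potential incident near ({(ax + bx) / 2:.0f}, {(ay + by) / 2:.0f})"
    return None

# Two groups converging in the yard (hypothetical coordinates)
yard_a = [(2, 3), (3, 4), (2, 5)]
yard_b = [(9, 4), (10, 5), (11, 4)]
print(proximity_alert(yard_a, yard_b))
```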
The computers cannot do anything more than officers who constantly watch surveillance monitors under ideal conditions. But in practice, officers are often distracted. When shifts change, an observation that is worth passing along may be forgotten. But machines do not blink or forget. They are tireless assistants.
The enthusiasm for such systems extends well beyond the nation’s prisons. High-resolution, low-cost cameras are proliferating, found in products like smartphones and laptop computers. The cost of storing images is dropping, and new software algorithms for mining, matching and scrutinizing the flood of visual data are progressing swiftly.
A computer-vision system can watch a hospital room and remind doctors and nurses to wash their hands, or warn of restless patients who are in danger of falling out of bed. It can, through a computer-equipped mirror, read a man’s face to detect his heart rate and other vital signs. It can analyze a woman’s expressions as she watches a movie trailer or shops online, and help marketers tailor their offerings accordingly. Computer vision can also be used at shopping malls, schoolyards, subway platforms, office complexes and stadiums.
All of which could be helpful — or alarming.
“Machines will definitely be able to observe us and understand us better,” said Hartmut Neven, a computer scientist and vision expert at Google. “Where that leads is uncertain.”
Google has been both at the forefront of the technology’s development and a source of the anxiety surrounding it. Its Street View service, which lets Internet users zoom in from above on a particular location, faced privacy complaints. Google will blur out people’s homes at their request.
Google has also introduced an application called Goggles, which allows people to take a picture with a smartphone and search the Internet for matching images. The company’s executives decided to exclude a facial-recognition feature, which they feared might be used to find personal information on people who did not know that they were being photographed.
Despite such qualms, computer vision is moving into the mainstream. With this technological evolution, scientists predict, people will increasingly be surrounded by machines that can not only see but also reason about what they are seeing, in their own limited way.
January 17, 2011
Heavy Doses of DNA Data, With Few Side Effects
By JOHN TIERNEY
When companies tried selling consumers the results of personal DNA tests, worried doctors and assorted health experts rushed to the public’s rescue. What if the risk assessments were inaccurate or inconsistent? What if people misinterpreted the results and did something foolish? What if they were traumatized by learning they were at high risk for Alzheimer’s or breast cancer or another disease?
The what-ifs prompted New York State to ban the direct sale of the tests to consumers. Members of Congress denounced the tests as “snake oil,” and the Food and Drug Administration has recently threatened the companies with federal oversight. Members of a national advisory commission concluded that personal DNA testing needed to be carefully supervised by experts like themselves.
But now, thanks to new research, there’s a less hypothetical question to consider: What if the would-be guardians of the public overestimated the demand for their supervisory services?
In two separate studies of genetic tests, researchers have found that people are not exactly desperate to be protected from information about their own bodies. Most people say they’ll pay for genetic tests even if the predictions are sometimes wrong, and most people don’t seem to be traumatized even when they receive bad news.
“Up until now there’s been lots of speculation and what I’d call fear-mongering about the impact of these tests, but now we have data,” says Dr. Eric Topol, the senior author of a report published last week in The New England Journal of Medicine. “We saw no evidence of anxiety or distress induced by the tests.”
He and colleagues at the Scripps Translational Science Institute followed more than 2,000 people who had a genomewide scan by the Navigenics company. After providing saliva, they were given estimates of their genetic risk for more than 20 different conditions, including obesity, diabetes, rheumatoid arthritis, several forms of cancer, multiple sclerosis and Alzheimer’s. About six months after getting the test results, delivered in a 90-page report, the typical person’s level of psychological anxiety was no higher than it had been before taking the test.
Although they were offered sessions, at no cost, with genetic counselors who could interpret the results and allay their anxieties, only 10 percent of the people bothered to take advantage of the opportunity. They apparently didn’t feel overwhelmed by the information, and it didn’t seem to cause much rash behavior, either.
In fact, the researchers were surprised to see how little effect it had. While about a quarter of the people discussed the results with their personal physicians, they generally did not change their diets or their exercise habits even when they’d been told these steps might lower some of their risks.
“We had theorized there would be an improvement in lifestyle, but we saw no sign whatsoever,” Dr. Topol says. “Instead of turning inward and becoming activists about their health, they turned to medical screening. They had a significant increase in the intent to have a screening test, like a colonoscopy if they were at higher risk for colon cancer.”
The people in the study chose on their own to pay for the tests — about $225, a steep discount from the retail price at the time — so they weren’t necessarily representative of the general population. But in another study, published in Health Economics, researchers surveyed a representative sample of nearly 1,500 people and found most people willing to take a test even if it didn’t perfectly predict their risks for disease.
About 70 percent of the respondents were willing to take even an imperfect test for genetic risks of Alzheimer’s, and more than three-quarters were willing to take such tests for arthritis, breast cancer and prostate cancer. Most people also said they’d be willing to spend money out of their own pocket for the test, typically somewhere between $300 and $600.
A minority of the respondents didn’t want the tests even if they were free, and explained that they didn’t want to live with the knowledge. But the rest attached much more value to the tests than have the experts who have been warning of the dangers.
“The medical field has been paternalistic about these tests,” says Peter J. Neumann, the lead author of the study, who is director of the Center for the Evaluation of Value and Risk in Health at Tufts Medical Center. “We’ve been saying that we shouldn’t give people this information because it might be wrong or we might worry them or we can’t do anything about it. But people tell us they want the information enough to pay for it.”
Why do experts differ from consumers on this issue? You could argue that the experts are better informed, but you could also argue that some of them are swayed by their own self-interest. Traditionally, people have had to go through a doctor to get a test, which could mean paying a fee to the physician as well as to a licensed genetic counselor. Buying tests directly from a company like Navigenics or 23andMe can cut out hundreds of dollars in fees to the middlemen.
To experts, the tests may seem unnecessary or wasteful when there’s nothing doctors can do to prevent the disease. But consumers have other reasons to want the results. They may find even bad news preferable to the anxious limbo of uncertainty; they may consider an imprecise test better than nothing at all.
“We should recognize that consumers might reasonably want the information for nonmedical reasons,” Dr. Neumann says. “People value it for its own sake, and because they feel more in control of their lives.”
The traditional structure of American medicine gives control to doctors and to centralized regulators who make treatment decisions for everyone. These genetic tests represent a different philosophy, and point toward a possible future with people taking more charge of their own care and seeking treatments customized to their bodies. “What we have today is population medicine at the 30,000-foot level,” says Dr. Topol. “These tests are the beginning of a new way to individualize medicine. One of the most immediate benefits is being able to use the genetic knowledge to tweak the kind of drugs people take, like choosing among statins and beta blockers to minimize side effects.”
That may be the self-empowered future, but for now residents of New York still can’t be trusted to buy these tests directly. It’s paternalism run amok, says Lee Silver, a professor of molecular biology and of public policy at Princeton, who is developing another variety of genetic test for consumers.
“It seems like a no-brainer,” Dr. Silver says, “that any competent adult should be free to purchase an analysis of their own DNA as long as they have been informed in advance of what could potentially be revealed in the analysis. You should have access to information about your own genome without a permission slip from your doctor.”
The paternalists argue that it’s still unclear how to interpret some of these genetic tests — and it is, of course. But if you ban these tests, or effectively eliminate them for most people by imposing expensive and time-consuming restrictions, how does that help the public? When it comes to knowing their own genetic risks, most people seem to prefer imperfect knowledge to perfect ignorance.
By Marianne De Nazareth
Freelance Journalist - India
Wednesday, 19 January 2011
Sustainable energy is the need of the hour, especially as fossil fuels draw criticism for driving rising greenhouse gas (GHG) emissions. So meeting the UNEP Sasakawa Prize winner for 2009/2010 in Bali, Indonesia was a great honor.
The Sasakawa Prize is given in recognition of projects that change lives through sustainable innovation. Worth US$200,000, the UNEP Sasakawa Prize is awarded annually to applicants who have built sustainable, replicable grassroots projects around the world.
The winners are selected by an illustrious panel of four, including Nobel Peace Prize laureate and UN Messenger of Peace Wangari Maathai. Others on the jury included UNEP executive director Achim Steiner, Nobel chemistry laureate and 1999 Sasakawa winner Professor Mario Molina, and Ms. Wakako Hironaka, a member of Japan’s House of Councillors. The UNEP Sasakawa Prize is sponsored by the Japan-based Nippon Foundation.
Sameer Hajee, one of the two winners, engineered his winning design through his company Nuru Designs, which he founded in 2008. He previously worked as a microprocessor design engineer in Silicon Valley, California and as a telecom engineer for Afghanistan’s first mobile phone network provider, Roshan, in Kabul.
Hajee has a bachelor's degree in Electrical Engineering from McMaster University in Canada and an MBA from INSEAD, one of the world’s leading business schools. He also studied at the Wharton School in the US through the INSEAD-Wharton Alliance. He is a Canadian national with native roots in Kenya.
Young and handsome, Hajee said that he had run successful trials of his invention in Gujarat in western India and in Kenya.
“I am totally committed to social enterprise and the technology I develop is to help the poor rural population. Two billion lack access to energy sources across the developing world. In India I visited them and found they spent a quarter of their monthly salaries on kerosene,” he told OnIslam.net.
Although the government subsidizes kerosene, it still eats up a significant part of people’s tiny incomes. Kerosene also has a very harmful impact on the environment. Hajee wants to remove kerosene from these rural households.
“You see kerosene is also carcinogenic and that hits so many women and children who breathe these kerosene fumes in their closed, tiny huts.”
What Hajee has invented is off-the-grid energy. The concept is being replicated in Kenya and India and is spinning off employment opportunities.
Hajee's invention empowers the locals to reduce their dependence on fossil fuels, which cause GHG emissions, and helps to alleviate their poverty.
When asked what he was going to do with the US$100,000 prize, Hajee smiles and says, “I want to use it to scale up my operations in Rwanda, Kenya and India. I have used local bicycle parts to make the machine.”
The machine looks like two cycle pedals fitted on a box, raised on a stand so a person can pedal it. It charges a set of pod light bulbs that can be used for task-based lighting, such as studying, looking after a baby at night, or visiting the toilet. It can charge five light bulbs to provide up to forty hours of light. To charge the bulbs, a person pedals at a comfortable speed, not necessarily faster than one rotation per second.
Hajee was proud that his invention tries to use human-power efficiently.
“That is not a tiring speed for a normal man. The machine costs US$150 which is bought by an entrepreneur with micro-finance. The rental is 20 cents per light bulb and the entrepreneur can pay back his loan in six months. In India, there are five entrepreneurs in Madhya Pradesh and Orissa.”
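The payback arithmetic roughly checks out. As a back-of-envelope sketch (the rental cadence is not stated in the article, so renting all five bulbs once per day is an assumption):

```python
# Back-of-envelope payback estimate for the pedal generator.
# Assumption (not stated in the article): the entrepreneur rents out
# all five bulbs once per day; the actual rental cadence may differ.

machine_cost = 150.00     # US$, bought with micro-finance
rental_per_bulb = 0.20    # US$ per bulb per rental
bulbs = 5

daily_revenue = rental_per_bulb * bulbs       # $1.00 per day
payback_days = machine_cost / daily_revenue   # 150 days

print(f"Payback in about {payback_days:.0f} days "
      f"(~{payback_days / 30:.0f} months)")
```

That is about five months of gross revenue; micro-finance interest and idle days would plausibly stretch it to the six-month payback Hajee cites.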
Jury member and Nobel laureate Wangari Maathai told the audience at the prize-giving ceremony that the panelists looked at various parameters in choosing the winners. The main criterion was that the invention had to be inexpensive to build and replicable in different countries.
The innovations had to respond to the needs of the marginalized, who are most often dependent on basic resources to live. These projects also empower the locals to reduce their dependence on fossil fuels, which cause GHG emissions, and help to alleviate their poverty.
As for Hajee, he was very proud of his innovation.
“It is one thing to develop an idea, but it is quite another to be there to see its success, [and to help] overcome poverty with a simple tool.”
February 28, 2011
Space Tourism May Mean One Giant Leap for Researchers
By KENNETH CHANG
If all goes as planned, within a couple of years, tourists will be rocketing into space aboard a Virgin Galactic space plane — paying $200,000 for about four minutes of weightlessness — before coming back down for a landing on a New Mexico runway.
Sitting in the next seat could be a scientist working on a research experiment.
Science, perhaps even more than tourism, could turn out to be big business for Virgin and other companies that are aiming to provide short rides above the 62-mile altitude that marks the official entry into outer space, eventually on a daily basis.
A $200,000 ticket is prohibitively expensive except for a small slice of the wealthy, but compared with the millions of dollars that government agencies like NASA typically spend to get experiments into space, “it’s revolutionary,” said S. Alan Stern, an associate vice president of the Southwest Research Institute’s space sciences and engineering division in Boulder, Colo.
He is a spirited evangelist for the science possibilities of what is known in aerospace circles as suborbital travel. Just as important as the lower cost, scientists will be able to get their experiments to space more quickly and more often, Dr. Stern said.
“We’re really at the edge of something transformational,” he added.
Dr. Stern’s institute announced Monday that it had signed a contract and paid the deposit to send two of its scientists up in Virgin’s SpaceShipTwo vehicle. Southwest also intends to buy six more seats — $1.6 million in tickets over all.
April 25, 2011
Digging Deeper, Seeing Farther: Supercomputers Alter Science
By JOHN MARKOFF
SAN FRANCISCO — Inside a darkened theater a viewer floats in a redwood forest displayed with Imax-like clarity on a cavernous overhead screen.
The hovering sensation gives way to vertigo as the camera dives deeper into the forest, approaches a branch of a giant redwood tree, and then plunges first into a single leaf and then into an individual cell. Inside the cell the scene is evocative of the 1966 science fiction movie “Fantastic Voyage,” in which Lilliputian humans in a minuscule capsule take a medical journey through a human body.
There is an important difference — “Life: A Cosmic Journey,” a multimedia presentation now showing at the new Morrison Planetarium here at the California Academy of Sciences, relies not just on computer animation techniques, but on a wealth of digitized scientific data as well.
The planetarium show is a visually spectacular demonstration of the way computer power is transforming the sciences, giving scientists tools as important to current research as the microscope and telescope were to earlier scientists. Their use accompanies a fundamental change in the material that scientists study. Individual specimens, whether fossils, living organisms or cells, were once the substrate of discovery. Now, to an ever greater extent, researchers work with immense collections of digital data, and the mastery of such mountains of information depends on computing power.
The physical technology of scientific research is still here — the new electron microscopes, the telescopes, the particle colliders — but they are now inseparable from computing power, and it is the computers that let scientists find order and patterns in the raw information that the physical tools gather.
Computer power not only aids research, it defines the nature of that research: what can be studied, what new questions can be asked, and answered.
“The profound thing is that today all scientific instruments have computing intelligence inside, and that’s a huge change,” said Larry Smarr, an astrophysicist who is director of the California Institute for Telecommunications and Information Technology, or Calit2, a research consortium at the University of California, San Diego.
In the planetarium’s first production, “Fragile Planet,” the viewer was transported through the roof of the Morrison, first appearing to fly in a graceful arc around the Renzo Piano-designed museum and then quickly out into the solar system to explore the cosmos. Where visual imagery was once projected on the dome of the original Morrison Planetarium using an elaborate home-brew star projector, the new system is powered by three separate parallel computing systems which store so much data that the system is both telescope and microscope. From incomprehensibly small to unimaginably large, the computerized planetarium moves seamlessly over 12 orders of magnitude in the objects it presents. It can shift “from subatomic to the large-scale structure of the universe,” said Ryan Wyatt, an astronomer who is director of the planetarium.
It is, said Katy Börner, an Indiana University computer scientist who is a specialist in scientific visualization, a “macroscope.” She uses the word to describe a new class of computer-based scientific instruments to which the new planetarium’s virtual and physical machine belongs. These are composite tools, with different kinds of physical presences that have such powerful and flexible software programs that they become a complete scientific workbench that can be reconfigured by mixing and matching aspects of the software to tackle specific research problems.
The planetarium’s macroscope is designed for education, but it could be used for research. Like any macroscope, its essence is its capacity for approaching huge databases in a variety of ways. “Macroscopes provide a ‘vision of the whole,’ ” Dr. Börner wrote in the March issue of The Communications of the Association for Computing Machinery, “helping us ‘synthesize’ the related elements and detect patterns, trends and outliers while granting access to myriad details.” She said software-based scientific instruments are making it possible to uncover phenomena and processes that in the past have been “too great, slow or complex for the human eye and mind to notice and comprehend.”
Computing is reshaping scientific research in a number of ways, Dr. Börner notes. For example, independent scientists have increasingly given way to research teams; scientific papers in the field of high-energy physics now routinely list hundreds or even thousands of authors. It is unsurprising, in a way, since the Web was invented as a collaboration tool for the high-energy physics community at CERN, the European nuclear research laboratory, in the early 1990s. As a result, research teams in all scientific disciplines are increasingly both interdisciplinary and widely distributed geographically.
So-called Web 2.0 software, with its seamless linking of applications, has made it easier to share research findings, and that in turn has led to an explosion of collaborative efforts. It has also accelerated the range of cross-disciplinary projects as it has become easier to repurpose and combine software-based techniques ranging from analytical tools to utilities for exporting and importing data.
A macroscope need not be in a single physical location. To take one example, a midday visitor to the lab of Tom DeFanti, a computer graphics specialist, in the Calit2 building in San Diego is greeted by a wall-size array of screens that appears to offer a high-resolution window into a vacant laboratory somewhere else in the world. The distant room is a parallel laboratory at King Abdullah University of Science and Technology, in Thuwal, Saudi Arabia. Four years ago representatives of that university visited Calit2 and initiated a collaboration in which the American scientists helped create a parallel scientific visualization center in Thuwal connected to the Internet by up to 10 gigabits of bandwidth — enough to share high-resolution imagery and research.
Saudi researchers now have access to a software system known as Scalable Adaptive Graphics Environment, or SAGE, originally developed to permit scientists working far apart to share and visualize research data. SAGE is essentially an operating system for visual information, capable of displaying and manipulating images up to about one-third of a billion pixels — as much as 150 times more than what can be displayed on a conventional computer display.
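SAGE's claimed capacity is easy to sanity-check. Assuming a conventional desktop display of the era at roughly 1920×1200 pixels (the article does not name a baseline, so that resolution is an assumption):

```python
# Sanity check of SAGE's stated capacity: about one-third of a billion
# pixels, described in the article as roughly 150x a conventional display.
sage_pixels = 333_000_000       # ~1/3 billion pixels
conventional = 1920 * 1200      # assumed desktop resolution of the era

ratio = sage_pixels / conventional
print(f"SAGE can drive roughly {ratio:.0f}x a conventional display")
```

The ratio lands in the neighborhood of 150, consistent with the figure quoted in the article.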
“The killer application is collaboration; that is what people want,” Dr. DeFanti said. “You can save so much energy by not flying to London that it will run a rack of computers for a year.”
More than a decade ago Dr. Smarr began building a distributed supercomputing capability he called the OptIPuter, because it used the fiber-optic links among the nation’s supercomputer centers to make it possible to divide computing problems as well as digital data so that larger scientific computing loads could be shared.
The advent of high-performance computing systems, however, created a new bottleneck for scientists, he said. “Over the past decade computers have become over a thousand times faster because of Moore’s Law and the ability to store information has gone up roughly 10,000 times, while the number of pixels we can display is maybe only a factor of two different,” he said.
To make it possible for visualization to catch up with accelerating computing capacity, researchers at Calit2 and others have begun designing display systems called OptIPortals that offer better ways of representing scientific data.
Recently, the Calit2 researchers have begun building scaled-down versions called OptIPortables, which are smaller display systems that can be fashioned like Lego blocks from just a handful of displays, rather than dozens or hundreds. The OptIPortable displays can be quickly set up and moved, and Dr. DeFanti said his lab was now at capacity assembling systems for research groups around the world.
Within many scientific fields software-based instruments are quickly adding new functions as open-source systems make it possible for small groups or even individuals to add features that permit customization.
Cytoscape is a bioinformatics software tool set that evolved, beginning in 2001, from research in the laboratory of Leroy Hood at the University of Washington. Dr. Hood, one of the founders of the Institute for Systems Biology in Seattle, was a pioneer in the field of automated gene sequencing, and one of his graduate students at the time, Trey Ideker, was exploring whether it was possible to automate the mapping of gene interactions.
As complex a task as gene sequencing is, charting the multiplicity of interactions that are possible among the roughly 30,000 genes that make up the human genome is even more complex. It has led to the emergence of the field of network biology as biologists begin to build computer-aided models of cellular and disease processes.
“Very quickly we realized we weren’t the only ones facing this problem and that others were independently developing software tools,” Dr. Ideker said. The researchers decided to take what at the time was a large risk, and began to develop their code as an open-source software development project, meaning that it could be freely shared by the entire biological community. The project picked up speed when Dr. Ideker, who is now chief of genetics at the U.C.S.D. School of Medicine, merged his efforts with Gary Bader, a biologist who now runs a computational biology laboratory at the University of Toronto.
The project picked up collaborators in the past decade as other researchers decided to contribute to it rather than develop independent tools. The project picked up even more speed because the software was designed so that new modules could be contributed by independent researchers who wanted to tailor it for specific tasks.
“We allowed what we called plug-ins back in 2001 — nowadays with Apple’s success you would call them an app,” he said. “There are a couple of hundred apps available for Cytoscape.” The project is now maintained with a $6.5 million grant from the National Institute of General Medical Sciences at the National Institutes of Health.
Tools like Cytoscape have a symbiotic relationship with immense databases that have grown to support the activities of scientists who are studying newer fields like genomics and proteomics. Gene sequencing led to the creation of Genbank, which is now maintained by the National Center for Biotechnology Information. And with a growing array of digital data streams, other databases are being curated — in Europe, for example, at the European Bioinformatics Institute, which has begun to build an array of new databases for functions like protein interactions. Cytoscape helps transform the disparate databases into a federated whole with the aid of plug-ins that allow a scientist to pick and choose from different sources.
For Dr. Börner, the Indiana University computer scientist, the Cytoscape model is a powerful one that builds on the sharing mechanism that is the foundation of the Internet.
The idea, she said, is inspired by witnessing the power and impact of the sharing inherent in Web services like Flickr and YouTube. Moreover, it has the potential of being rapidly replicated across many scientific disciplines.
“You can now also share plug-in algorithms,” she said. “You can now create your own library by plugging in your favorite algorithms into your tool.”
May 22, 2011
When the Internet Thinks It Knows You
By ELI PARISER
ONCE upon a time, the story goes, we lived in a broadcast society. In that dusty pre-Internet age, the tools for sharing information weren’t widely available. If you wanted to share your thoughts with the masses, you had to own a printing press or a chunk of the airwaves, or have access to someone who did. Controlling the flow of information was an elite class of editors, producers and media moguls who decided what people would see and hear about the world. They were the Gatekeepers.
Then came the Internet, which made it possible to communicate with millions of people at little or no cost. Suddenly anyone with an Internet connection could share ideas with the whole world. A new era of democratized news media dawned.
You may have heard that story before — maybe from the conservative blogger Glenn Reynolds (blogging is “technology undermining the gatekeepers”) or the progressive blogger Markos Moulitsas (his book is called “Crashing the Gate”). It’s a beautiful story about the revolutionary power of the medium, and as an early practitioner of online politics, I told it to describe what we did at MoveOn.org. But I’m increasingly convinced that we’ve got the ending wrong — perhaps dangerously wrong. There is a new group of gatekeepers in town, and this time, they’re not people, they’re code.
Today’s Internet giants — Google, Facebook, Yahoo and Microsoft — see the remarkable rise of available information as an opportunity. If they can provide services that sift through the data and supply us with the most personally relevant and appealing results, they’ll get the most users and the most ad views. As a result, they’re racing to offer personalized filters that show us the Internet that they think we want to see. These filters, in effect, control and limit the information that reaches our screens.
By now, we’re familiar with ads that follow us around online based on our recent clicks on commercial Web sites. But increasingly, and nearly invisibly, our searches for information are being personalized too. Two people who each search on Google for “Egypt” may get significantly different results, based on their past clicks. Both Yahoo News and Google News make adjustments to their home pages for each individual visitor. And just last month, this technology began making inroads on the Web sites of newspapers like The Washington Post and The New York Times.
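To see how click-history personalization can split two users' views of the same query, consider a deliberately crude reranker. The scoring rule and data below are invented for illustration; no real search engine is anywhere near this simple.

```python
# Toy personalized reranker: boost results whose topic the user has
# clicked before. Invented scoring, not any company's actual algorithm.

def rerank(results, click_history):
    """Order results by how often the user clicked the same topic before."""
    def score(result):
        return click_history.count(result["topic"])
    return sorted(results, key=score, reverse=True)

# The same raw results for a query like "Egypt"
egypt_results = [
    {"title": "Protests in Cairo", "topic": "politics"},
    {"title": "Nile cruise deals",  "topic": "travel"},
]

# Two users with different click histories (hypothetical)
activist = ["politics", "politics", "travel"]
vacationer = ["travel", "travel", "travel", "politics"]

print([r["title"] for r in rerank(egypt_results, activist)])
print([r["title"] for r in rerank(egypt_results, vacationer)])
```

The same two results arrive in opposite orders for the two users, which is the mechanism the rest of the essay examines: harmless for shopping, more fraught for news.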
All of this is fairly harmless when information about consumer products is filtered into and out of your personal universe. But when personalization affects not just what you buy but how you think, different issues arise. Democracy depends on the citizen’s ability to engage with multiple viewpoints; the Internet limits such engagement when it offers up only information that reflects your already established point of view. While it’s sometimes convenient to see only what you want to see, it’s critical at other times that you see things that you don’t.
Like the old gatekeepers, the engineers who write the new gatekeeping code have enormous power to determine what we know about the world. But unlike the best of the old gatekeepers, they don’t see themselves as keepers of the public trust. There is no algorithmic equivalent to journalistic ethics.
Mark Zuckerberg, Facebook’s chief executive, once told colleagues that “a squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa.” At Facebook, “relevance” is virtually the sole criterion that determines what users see. Focusing on the most personally relevant news — the squirrel — is a great business strategy. But it leaves us staring at our front yard instead of reading about suffering, genocide and revolution.
There’s no going back to the old system of gatekeepers, nor should there be. But if algorithms are taking over the editing function and determining what we see, we need to make sure they weigh variables beyond a narrow “relevance.” They need to show us Afghanistan and Libya as well as Apple and Kanye.
Companies that make use of these algorithms must take this curatorial responsibility far more seriously than they have to date. They need to give us control over what we see — making it clear when they are personalizing, and allowing us to shape and adjust our own filters. We citizens need to uphold our end, too — developing the “filter literacy” needed to use these tools well and demanding content that broadens our horizons even when it’s uncomfortable.
It is in our collective interest to ensure that the Internet lives up to its potential as a revolutionary connective medium. This won’t happen if we’re all sealed off in our own personalized online worlds.
Eli Pariser, the president of the board of MoveOn.org, is the author of “The Filter Bubble: What the Internet Is Hiding From You.”
July 3, 2011
Tools of Entry, No Need for a Key Chain
By MATT RICHTEL and VERNE G. KOPYTOFF
SAN FRANCISCO — Front pockets and purses are slowly being emptied of one of civilization’s most basic and enduring tools: the key. It’s being swallowed by the cellphone.
New technology lets smartphones unlock hotel, office and house doors and open garages and even car doors.
It’s a not-too-distant cousin of the technology that allows key fobs to remotely unlock automobiles or key cards to be waved beside electronic pads at office entrances. What’s new is that it is on the device more people are using as the Swiss Army knife of electronics — in equal parts phone, memo pad, stereo, map, GPS unit, camera and game machine.
The phone simply sends a signal through the Internet and a converter box to a deadbolt or door knob. Other systems use internal company networks, like General Motors’ OnStar system, to unlock car doors.
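The phone-to-converter-box flow described above can be sketched as a minimal signed-message exchange. Everything here — the message format, the shared-secret scheme, the function names — is an illustrative assumption, not any lock vendor's actual protocol:

```python
import hashlib
import hmac
import json
import time

# Illustrative only: real systems would provision per-device keys at pairing time.
SHARED_SECRET = b"provisioned-at-pairing-time"

def phone_build_unlock(door_id: str) -> dict:
    """Phone side: sign an unlock command before sending it over the Internet."""
    msg = {"door": door_id, "cmd": "unlock", "ts": int(time.time())}
    payload = json.dumps(msg, sort_keys=True).encode()
    msg["mac"] = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return msg

def converter_box_verify(msg: dict, max_age_s: int = 30) -> bool:
    """Converter-box side: check signature and freshness before
    actuating the deadbolt."""
    mac = msg.pop("mac")
    payload = json.dumps(msg, sort_keys=True).encode()
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    fresh = abs(time.time() - msg["ts"]) <= max_age_s
    return hmac.compare_digest(mac, expected) and fresh
```

The timestamp check limits replay of an intercepted unlock command; a tampered message (say, a changed door ID) fails the signature check.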
Because nearly everyone has a cellphone, a number of start-ups, lock companies and carmakers are betting on broad acceptance of the technology.
July 21, 2011
Race to the Moon Heats Up for Private Firms
By KENNETH CHANG
Now that the last space shuttle has landed back on Earth, a new generation of space entrepreneurs would like to whip up excitement about the prospect of returning to the Moon.
Spurred by a $30 million purse put up by Google, 29 teams have signed up for a competition to become the first private venture to land on the Moon. Most of them are unlikely to overcome the financial and technical challenges to meet the contest deadline of December 2015, but several teams think they have a good shot to win — and to take an early lead in a race to take commercial advantage of our celestial neighbor.
At the very least, a flotilla of unmanned spacecraft could be headed Moonward within the next few years, with goals that range from lofty to goofy.
One Silicon Valley venture, Moon Express, is positioning itself as a future FedEx for Moon deliveries: if you have something to send there, the company would like to take it. Moon Express was having a party on Thursday night to show off the flight capabilities of its lunar lander, based on technology it licensed from NASA, and “to begin the next era of the private commercial race to the Moon,” as the invitation put it.
“In the near future, the Moon Express lunar lander will be mining the Moon for precious resources that we need here on Earth,” the invitation promised. “Years from now, we will all remember we were there.”
August 13, 2011
A Theory of Everything (Sort of)
By THOMAS L. FRIEDMAN
LONDON burns. The Arab Spring triggers popular rebellions against autocrats across the Arab world. The Israeli Summer brings 250,000 Israelis into the streets, protesting the lack of affordable housing and the way their country is now dominated by an oligopoly of crony capitalists. From Athens to Barcelona, European town squares are being taken over by young people railing against unemployment and the injustice of yawning income gaps, while the angry Tea Party emerges from nowhere and sets American politics on its head.
What’s going on here?
There are multiple and different reasons for these explosions, but to the extent they might have a common denominator I think it can be found in one of the slogans of Israel’s middle-class uprising: “We are fighting for an accessible future.” Across the world, a lot of middle- and lower-middle-class people now feel that the “future” is out of their grasp, and they are letting their leaders know it.
Why now? It starts with the fact that globalization and the information technology revolution have gone to a whole new level. Thanks to cloud computing, robotics, 3G wireless connectivity, Skype, Facebook, Google, LinkedIn, Twitter, the iPad, and cheap Internet-enabled smartphones, the world has gone from connected to hyper-connected.
This is the single most important trend in the world today. And it is a critical reason why, to get into the middle class now, you have to study harder, work smarter and adapt quicker than ever before. All this technology and globalization are eliminating more and more “routine” work — the sort of work that once sustained a lot of middle-class lifestyles.
The merger of globalization and I.T. is driving huge productivity gains, especially in recessionary times, where employers are finding it easier, cheaper and more necessary than ever to replace labor with machines, computers, robots and talented foreign workers. It used to be that only cheap foreign manual labor was easily available; now cheap foreign genius is easily available. This explains why corporations are getting richer and middle-skilled workers poorer. Good jobs do exist, but they require more education or technical skills. Unemployment today still remains relatively low for people with college degrees. But to get one of those degrees and to leverage it for a good job requires everyone to raise their game. It’s hard.
Think of what The Times reported last February: At little Grinnell College in rural Iowa, with 1,600 students, “nearly one of every 10 applicants being considered for the class of 2015 is from China.” The article noted that dozens of other American colleges and universities are seeing a similar surge as well. And the article added this fact: Half the “applicants from China this year have perfect scores of 800 on the math portion of the SAT.”
Not only does it take more skill to get a good job, but for those unable to raise their games, governments can no longer afford generous welfare support or the cheap, nothing-down mortgage credit that once sustained a lot of manual-labor jobs in construction and retail. Alas, for the 50 years after World War II, to be a president, mayor, governor or university president meant, more often than not, giving things away to people. Today, it means taking things away from people.
All of this is happening at a time when this same globalization/I.T. revolution enables the globalization of anger, with all of these demonstrations now inspiring each other. Some Israeli protesters carried a sign: “Walk Like an Egyptian.” While these social protests — and their flash-mob, criminal mutations like those in London — are not caused by new technologies per se, they are fueled by them.
This globalization/I.T. revolution is also “super-empowering” individuals, enabling them to challenge hierarchies and traditional authority figures — from business to science to government. It is also enabling the creation of powerful minorities and making governing harder and minority rule easier than ever. See dictionary for: “Tea Party.”
Surely one of the iconic images of this time is the picture of Egypt’s President Hosni Mubarak — for three decades a modern pharaoh — being hauled into court, held in a cage with his two sons and tried for attempting to crush his people’s peaceful demonstrations. Every leader and C.E.O. should reflect on that photo. “The power pyramid is being turned upside down,” said Yaron Ezrahi, an Israeli political theorist.
So let’s review: We are increasingly taking easy credit, routine work and government jobs and entitlements away from the middle class — at a time when it takes more skill to get and hold a decent job, at a time when citizens have more access to media to organize, protest and challenge authority and at a time when this same merger of globalization and I.T. is creating huge wages for people with global skills (or for those who learn to game the system and get access to money, monopolies or government contracts by being close to those in power) — thus widening income gaps and fueling resentments even more.
Put it all together and you have today’s front-page news.
Massive Biometric Project Gives Millions of Indians an ID
By Vince Beiser
August 19, 2011, 1:27 pm
Wired, September 2011
"Kiran has never touched or even seen a real computer, let alone an iris scanner. She thinks she’s 32, but she’s not sure exactly when she was born. Kiran has no birth certificate, or ID of any kind for that matter—no driver’s license, no voting card, nothing at all to document her existence. Eight years ago, she left her home in a destitute farming village and wound up here in Mongolpuri, a teeming warren of shabby apartment blocks and tarp-roofed shanties where grimy barefoot children, cargo bicycles, haggard dogs, goats, and cows jostle through narrow, trash-filled streets. Kiran earns about $1.50 a day sorting cast-off clothing for recycling. In short, she’s just another of India’s vast legions of anonymous poor.
Now, for the first time, her government is taking note of her. Kiran and her children are having their personal information recorded in an official database—not just any official database, but one of the biggest the world has ever seen. They are the latest among millions of enrollees in India’s Unique Identification project, also known as Aadhaar, which means “the foundation” in several Indian languages. Its goal is to issue identification numbers linked to the fingerprints and iris scans of every single person in India.
That’s more than 1.2 billion people—everyone from Himalayan mountain villagers to Bangalorean call-center workers, from Rajasthani desert nomads to Mumbai street beggars—speaking more than 300 languages and dialects. The biometrics and the Aadhaar identification number will serve as a verifiable, portable, all but unfakable national ID. It is by far the biggest and most technologically complicated biometrics program ever attempted."
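Iris-based identification of the kind Aadhaar relies on is commonly done by comparing binary "iris codes" by Hamming distance: two scans of the same eye differ in few bits, scans of different eyes in roughly half. The sketch below is illustrative only — real systems use codes of roughly 2,048 bits with occlusion masks, and the 16-bit codes and 0.32 threshold here are invented for demonstration:

```python
def hamming_fraction(code_a: int, code_b: int, bits: int) -> float:
    """Fraction of differing bits between two fixed-length binary iris codes."""
    mask = (1 << bits) - 1
    return bin((code_a ^ code_b) & mask).count("1") / bits

def is_same_person(code_a: int, code_b: int, bits: int = 16,
                   threshold: float = 0.32) -> bool:
    # A low fraction of differing bits means the codes likely came from
    # the same eye; the threshold value is purely illustrative.
    return hamming_fraction(code_a, code_b, bits) < threshold
```

Deduplication at Aadhaar's scale means running comparisons like this (plus fingerprints) against every prior enrollee, which is a large part of what makes the program so technically demanding.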