Category Archives: computers

Digital Data and DNA

From Centauri Dreams:

One of the benefits of constantly proliferating information is that we’re getting better and better at storing lots of stuff in small spaces. I love the fact that when I travel, I can carry hundreds of books with me on my Kindle, and to those who say you can only read one book at a time, I respond that I like the choice of books always at hand, and the ability to keep key reference sources in my briefcase. Try lugging Webster’s 3rd New International Dictionary around with you and you’ll see why putting it on a Palm III was so delightful about a decade ago. There is, alas, no Kindle or Nook version.

Did I say information was proliferating? Dave Turek, a designer of supercomputers for IBM (the Deep Blue machine that defeated world chess champion Garry Kasparov is among his creations) wrote last May that from the beginning of recorded time until 2003, humans had created five billion gigabytes of information (five exabytes). In 2011, that amount of information was being created every two days. Turek’s article says that by 2013, IBM expects that interval to shrink to every ten minutes, which calls for new computing designs that can handle data density of all but unfathomable proportions.
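To put those numbers in perspective, here’s a quick back-of-envelope sketch of the data rates Turek’s figures imply; the five-thousand-year span for recorded history is my own rough assumption:

```python
# Back-of-envelope rates implied by Turek's figures quoted above.
EXABYTE = 10**18  # bytes

corpus = 5 * EXABYTE  # "five exabytes" of information

intervals = {
    "through 2003": 5000 * 365 * 86400,  # ~5,000 years of recorded history, in seconds
    "in 2011":      2 * 86400,           # every two days
    "by 2013":      10 * 60,             # every ten minutes
}

for era, seconds in intervals.items():
    print(f"{era}: ~{corpus / seconds:.1e} bytes/second")

# Going from "every two days" to "every ten minutes" is a ~288x jump in rate.
```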

A recent post on Smithsonian.com’s Innovations blog captures the essence of what’s happening:

But how is this possible? How did data become such digital kudzu? Put simply, every time your cell phone sends out its GPS location, every time you buy something online, every time you click the Like button on Facebook, you’re putting another digital message in a bottle. And now the oceans are pretty much covered with them.

And that’s only part of the story. Text messages, customer records, ATM transactions, security camera images…the list goes on and on. The buzzword to describe this is “Big Data,” though that hardly does justice to the scale of the monster we’ve created.

The article rightly notes that we haven’t begun to catch up with our ability to capture information, which is why, for example, so much fertile ground for exploration can be found inside the data sets from astronomical surveys and other projects that have been making observations faster than scientists can analyze them. Learning how to work our way through gigantic databases is the premise of Google’s BigQuery software, which is designed to comb terabytes of information in seconds. Even so, the challenge is immense. Consider that the algorithms used by the Kepler team, sharp as they are, have been usefully supplemented by human volunteers working with the Planet Hunters project, who sometimes see things that computers do not.
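For a taste of what combing a big public dataset looks like in practice, here’s a minimal sketch using BigQuery’s Python client and one of Google’s public sample tables; both the client library and the sample dataset are my own choices for illustration, not anything named in the article:

```python
# Minimal BigQuery sketch: tally Shakespeare word counts by corpus.
# Assumes Google Cloud credentials are configured in the environment.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
    SELECT corpus, SUM(word_count) AS words
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY corpus
    ORDER BY words DESC
    LIMIT 5
"""

for row in client.query(sql).result():  # the scan runs server-side, in seconds
    print(row.corpus, row.words)
```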

But as we work to draw value out of the data influx, we’re also finding ways to translate data into even denser media, a prerequisite for future deep space probes that will, we hope, be gathering information at faster clips than ever before. Consider work at the European Bioinformatics Institute in the UK, where researchers Nick Goldman and Ewan Birney have managed to code Shakespeare’s 154 sonnets into DNA, in which form a single sonnet weighs 0.3 millionths of a millionth of a gram. You can read about this in Shakespeare and Martin Luther King demonstrate potential of DNA storage, an article on their paper in Nature which just ran in The Guardian.

Image: Coding The Bard into DNA makes for intriguing data storage prospects. This portrait, possibly by John Taylor, is one of the few images we have of the playwright (now on display at the National Portrait Gallery in London).

Goldman and Birney are talking about DNA as an alternative to spinning hard disks and newer methods of solid-state storage. Their work is given punch by the calculation that a gram of DNA could hold as much information as more than a million CDs. Here’s how The Guardian describes their method:

The scientists developed a code that used the four molecular letters or “bases” of genetic material – known as G, T, C and A – to store information.

Digital files store data as strings of 1s and 0s. The Cambridge team’s code turns every block of eight numbers in a digital code into five letters of DNA. For example, the eight digit binary code for the letter “T” becomes TAGAT. To store words, the scientists simply run the strands of five DNA letters together. So the first word in “Thou art more lovely and more temperate” from Shakespeare’s sonnet 18, becomes TAGATGTGTACAGACTACGC.

The converted sonnets, along with DNA codings of Martin Luther King’s ‘I Have a Dream’ speech and the famous double helix paper by Francis Crick and James Watson, were sent to Agilent, a US firm that makes physical strands of DNA for researchers. The test tube Goldman and Birney got back held just a speck of DNA, but running it through a gene sequencing machine, the researchers were able to read the files again. This parallels work by George Church (Harvard University), who last year preserved his own book Regenesis via DNA storage.
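To make the Guardian’s eight-bits-to-five-letters description concrete, here’s a toy sketch. This is not the actual Goldman-Birney code table, which was also designed to avoid runs of repeated bases that sequencers misread; it just shows why five DNA letters are enough to hold any eight-bit byte (4^5 = 1024 ≥ 2^8 = 256):

```python
# Toy byte-to-DNA packing: one byte becomes five bases (base-4 digits).
# NOT the published Goldman-Birney code table, just the counting argument.
BASES = "ACGT"

def byte_to_dna(b: int) -> str:
    """Encode one byte (0-255) as five DNA bases."""
    digits = []
    for _ in range(5):
        digits.append(BASES[b % 4])
        b //= 4
    return "".join(reversed(digits))

def encode(text: str) -> str:
    """Run the five-base groups together, as the excerpt describes."""
    return "".join(byte_to_dna(ch) for ch in text.encode("ascii"))

print(encode("Thou"))  # 4 letters -> 20 bases (cf. TAGATGTGTACAGACTACGC)
```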

The differences between DNA and conventional storage are striking. From the paper in Nature (thanks to Eric Davis for passing along a copy):

The DNA-based storage medium has different properties from traditional tape- or disk-based storage. As DNA is the basis of life on Earth, methods for manipulating, storing and reading it will remain the subject of continual technological innovation. As with any storage system, a large-scale DNA archive would need stable DNA management and physical indexing of depositions. But whereas current digital schemes for archiving require active and continuing maintenance and regular transferring between storage media, the DNA-based storage medium requires no active maintenance other than a cold, dry and dark environment (such as the Global Crop Diversity Trust’s Svalbard Global Seed Vault, which has no permanent on-site staff) yet remains viable for thousands of years even by conservative estimates.

The paper goes on to describe DNA as ‘an excellent medium for the creation of copies of any archive for transportation, sharing or security.’ The problem today is the high cost of DNA production, but the trends are moving in the right direction. Couple this with DNA’s incredible storage possibilities — one of the Harvard researchers working with George Church estimates that the total of the world’s information could one day be stored in about four grams of the stuff — and you have a storage medium that could handle vast data-gathering projects like those that will spring from the next generation of telescope technology both here on Earth and aboard space platforms.
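That four-gram figure checks out as rough arithmetic if you assume the widely reported ~455 exabytes per gram theoretical density from the Church group’s 2012 work, and a 2011-era estimate of roughly 1.8 zettabytes for the world’s data; both inputs are my assumptions, not numbers from the paper:

```python
# Sanity-checking the "about four grams" claim with assumed inputs.
EB = 10**18  # exabyte, in bytes
ZB = 10**21  # zettabyte, in bytes

density = 455 * EB      # bytes per gram of DNA (assumed theoretical maximum)
world_info = 1.8 * ZB   # world's digital information, ~2011 estimate (assumed)

print(world_info / density)  # ~3.96 grams -- consistent with "about four grams"
```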

I am not a geneticist or biologist of any kind, so I can’t offer a proper review of the technology or the wisdom of such a storage method, other than to note that biological systems tend to break down over long periods of time, even small specks of DNA.

I can understand the information-carrying capacity of DNA; living things require enormous amounts of information in order to operate their bodies and reproduce, so putting vast amounts of generic info into DNA does make sense.

I would suggest making a virtual model of a DNA molecule, storing it in a crystal and loading the info that way. It would last longer IMO.

Data Storage: The DNA Option

No Terminators Here, It’s Old-Fashioned Human Killers

From Wired.com:

The Pentagon wants to make perfectly clear that every time one of its flying robots releases its lethal payload, it’s the result of a decision made by an accountable human being in a lawful chain of command. Human rights groups and nervous citizens fear that technological advances in autonomy will slowly lead to the day when robots make that critical decision for themselves. But according to a new policy directive issued by a top Pentagon official, there shall be no SkyNet, thank you very much.

Here’s what happened while you were preparing for Thanksgiving: Deputy Defense Secretary Ashton Carter signed, on November 21, a series of instructions to “minimize the probability and consequences of failures” in autonomous or semi-autonomous armed robots “that could lead to unintended engagements,” starting at the design stage (.pdf, thanks to Cryptome.org). Translated from the bureaucrat, the Pentagon wants to make sure that there isn’t a circumstance when one of the military’s many Predators, Reapers, drone-like missiles or other deadly robots effectively automatizes the decision to harm a human being.

The hardware and software controlling a deadly robot needs to come equipped with “safeties, anti-tamper mechanisms, and information assurance.” The design has got to have proper “human-machine interfaces and controls.” And, above all, it has to operate “consistent with commander and operator intentions and, if unable to do so, terminate engagements or seek additional human operator input before continuing the engagement.” If not, the Pentagon isn’t going to buy it or use it.

It’s reasonable to worry that advancements in robot autonomy are going to slowly push flesh-and-blood troops out of the role of deciding who to kill. To be sure, military autonomous systems aren’t nearly there yet. No Predator, for instance, can fire its Hellfire missile without a human directing it. But the military is wading into murkier ethical and operational waters: The Navy’s experimental X-47B prototype will soon be able to land on an aircraft carrier with the barest of human directions. That’s still a long way from deciding on its own to release its weapons. But this is how a very deadly slippery slope begins.

It’s that sort of thing that worries Human Rights Watch, for instance. Last week, the organization, among the most influential non-governmental institutions in the world, issued a report warning that new developments in drone autonomy represented the demise of established “legal and non-legal checks on the killing of civilians.” Its solution: prohibit the “development, production, and use of fully autonomous weapons through an international legally binding instrument.”

Laudable impulse, wrong solution, writes Matthew Waxman. Waxman, a former Defense Department official for detainee policy, and co-author Kenneth Anderson observe that technological advancements in robotic weapons autonomy are far from predictable, and the definition of “autonomy” is murky enough to make it unwise to tell the world that it has to curtail those advancements at an arbitrary point. Better, they write, for the U.S. to start an international conversation about how much autonomy on a killer robot is appropriate, so as to “embed evolving internal state standards into incrementally advancing automation.”

Waxman and Anderson should be pleased with Carter’s memo, since those standards are exactly what Carter wants the Pentagon to bake into its next drone arsenal. Before the Pentagon agrees to develop or buy new autonomous or somewhat autonomous weapons, a team of senior Pentagon officials and military officers will have to certify that the design itself “incorporates the necessary capabilities to allow commanders and operators to exercise appropriate levels of human judgment in the use of force.” The machines and their software need to provide reliability assurances and failsafes to make sure that’s how they work in practice, too. And anyone operating any such deadly robot needs sufficient certification in both the system they’re using and the rule of law. The phrase “appropriate levels of human judgment” is frequently repeated, to make sure everyone gets the idea. (Now for the lawyers to argue about the meaning of “appropriate.”)

So much for SkyNet. But Carter’s directive blesses the forward march of autonomy in most everything military robots do that can’t kill you. It “[d]oes not apply to autonomous or semi-autonomous cyberspace systems for cyberspace operations; unarmed, unmanned platforms; unguided munitions; munitions manually guided by the operator (e.g., laser- or wire-guided munitions); mines; or unexploded explosive ordnance,” Carter writes.

Oh happy-happy, joy-joy. The semi-intelligent machines still need a human in the loop to kill you, but don’t need one to spy on you.

Oh well, Big Brother still needs a body to put in jail to make the expense of robots worth their while I suppose…

Pentagon: A Human Will Always Decide When a Robot Kills You

Is Daydream Learning Possible?

From myth-os.com:

Sleep-learning, or presenting information to a sleeping person by playing a sound recording has not been very useful. Researchers have determined that learning during sleep is “impractical and probably impossible.” But what about daydream learning?

Subliminal learning is the concept of indirect learning by subliminal messages. James Vicary pioneered subliminal learning in 1957 when he planted messages in a movie shown in New Jersey. The messages flashed for a split second and told the audience to drink Coca-Cola and eat popcorn.

A recent study published in the journal Neuron used sophisticated perceptual masking, computational modeling, and neuroimaging to show that instrumental learning can occur in the human brain without conscious processing of contextual cues. Dr. Mathias Pessiglione from the Wellcome Trust Centre for Neuroimaging at the University College London reported: “We conclude that, even without conscious processing of contextual cues, our brain can learn their reward value and use them to provide a bias on decision making.” (“Subliminal Learning Demonstrated In Human Brain,” ScienceDaily, Aug. 28, 2008)

“By restricting the amount of time that the clues were displayed to study participants, they ensured that the brain’s conscious vision system could not process the information. Indeed, when shown the cues after the study, participants did not recall having seen any of them before. Brain scans of participants showed that the cues did not activate the brain’s main processing centers, but rather the striatum, which is presumed to employ machine-learning algorithms to solve problems.”

“When you become aware of the associations between the cues and the outcomes, you amplify the phenomenon,” Pessiglione said. “You make better choices.” (Alexis Madrigal, “Humans Can Learn from Subliminal Cues Alone,” Wired, August 27, 2008)

What better place for daydream learning than the Cloud? Cloud computing refers to resources and applications that are available from any Internet connected device.

The Cloud is also collectively associated with the “technological singularity” (popularized by science fiction writer Vernor Vinge) or the future appearance of greater-than-human super intelligence through technology. The singularity will surpass the human mind, be unstoppable, and increase human awareness.

“Could the Internet ‘wake up’? And if so, what sorts of thoughts would it think? And would it be friend or foe?

“Neuroscientist Christof Koch believes we may soon find out — indeed, the complexity of the Web may have already surpassed that of the human brain. In his book ‘Consciousness: Confessions of a Romantic Reductionist,’ published earlier this year, he makes a rough calculation: Take the number of computers on the planet — several billion — and multiply by the number of transistors in each machine — hundreds of millions — and you get about a billion billion, written more elegantly as 10^18. That’s a thousand times larger than the number of synapses in the human brain (about 10^15).”

In an interview, Koch, who taught at Caltech and is now chief scientific officer at the Allen Institute for Brain Science in Seattle, noted that the kinds of connections that wire together the Internet — its “architecture” — are very different from the synaptic connections in our brains, “but certainly by any measure it’s a very, very complex system. Could it be conscious? In principle, yes it can.” (Dan Falk, “Could the Internet Ever ‘Wake Up’? And would that be such a bad thing?” Slate, Sept. 20, 2012)

There has been some speculation about what it would take to bring down the Internet. According to most authorities, there is no Internet kill switch, regardless of what some organizations may claim. Parts of the net do go down from time to time, making it inaccessible for some — albeit temporarily. “Eventually the information will route around the dead spots and bring you back in,” said IT expert Dewayne Hendricks.

“The Internet works like the Borg Collective of Star Trek — it’s basically a kind of hive mind,” he adds. Essentially, because it’s in everybody’s best interest to keep the Internet up and running, there’s a constant effort to patch and repair any problems. “It’s like trying to defeat the Borg — a system that’s massively distributed, decentralized, and redundant.”
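Koch’s multiplication from the excerpt above is easy to reproduce; the exact inputs below are my own stand-ins for “several billion” computers and “hundreds of millions” of transistors:

```python
# Koch's back-of-envelope comparison, spelled out with assumed inputs.
computers   = 3e9   # "several billion" machines (stand-in value)
transistors = 3e8   # "hundreds of millions" per machine (stand-in value)
synapses    = 1e15  # ~10^15 synapses in the human brain

internet_switches = computers * transistors
print(f"{internet_switches:.0e} transistors vs {synapses:.0e} synapses")
print(f"ratio: ~{internet_switches / synapses:.0f}x")  # on the order of Koch's thousand-fold gap
```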

I have wondered about this at times and there have been science-fiction stories that have had it as a theme (Stross’s Accelerando and Rucker’s Postsingular).

It is debatable whether the ‘Net on its own will become sentient or not, but the potential is certainly there, and one wonders whether it hasn’t already!

Singularity Now: Is “Daydream Learning” Possible?

Hat tip to The Anomalist.

Moore’s Law and the Incredible Shrinking Transistor

Moore’s Law:

The law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper. The paper noted that the number of components in integrated circuits had doubled every year from the invention of the integrated circuit in 1958 until 1965 and predicted that the trend would continue “for at least ten years”. His prediction has proved to be uncannily accurate, in part because the law is now used in the semiconductor industry to guide long-term planning and to set targets for research and development.

This trend has continued for more than half a century. 2005 sources expected it to continue until at least 2015 or 2020. However, the 2010 update to the International Technology Roadmap for Semiconductors has growth slowing at the end of 2013, after which time transistor counts and densities are to double only every three years.

(Wikipedia, 2012)
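The compounding is easy to see if you project forward from a fixed starting point; here’s a small sketch assuming the modern two-year doubling, using the Intel 4004’s 2,300 transistors (1971) as the baseline:

```python
# Project transistor counts under a two-year doubling assumption.
def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Count projected from the Intel 4004 (2,300 transistors, 1971)."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for y in (1971, 1981, 1991, 2001, 2011):
    print(y, f"{transistors(y):,.0f}")  # 2011 lands near 2.4 billion, about right
```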

As noted above, Moore’s Law has been the moving force in the computer community for 47 years. For a while, the Law must’ve looked like it was coming up against the proverbial brick wall, with quantum computing waiting in the wings as its successor. But quantum computing is going to have to wait, or is going to be slightly different from what was originally prognosticated:

Moore’s Law could be safe for another decade or so. An international team of scientists has demonstrated a working transistor composed of a single atom–nearly 100 times smaller than the 22-nanometer cutting-edge transistors fabricated by Intel.

More importantly, the research team led by Michelle Simmons of the University of New South Wales in Sydney was able to show a method for repeating the process with great accuracy and in a fashion that is compatible with the CMOS technology used in transistor fabrication today.

“This is the first time anyone has shown control of a single atom in a substrate with this level of precise accuracy,” said Simmons, who worked with colleagues from the Korea Institute of Science and Technology Information, Purdue University, the University of Sydney, the University of Melbourne, and the University of New South Wales on the project.

The “law” associated with Intel co-founder Gordon Moore predicts a steady rate at which the density of transistors on silicon-based semiconductors increases over time. That steady procession of ever-smaller computer circuitry has held up for decades, but as the size of transistors approaches atomic scales, there have been serious questions as to whether Moore’s Law can last much longer than another five years or so.

The work of Simmons and her colleagues could show a way to keep making microprocessor circuitry smaller and smaller through 2020 and beyond.

As they run up against atomic scales with ever-smaller circuitry, semiconductor manufacturers today are running up against problems affecting transistor performance that stem from quantum effects (basically, the fact that materials interact very differently at very small sizes) and a need for precision that may not be possible with the lithographic methods currently in use.

In recent years, advances in quantum computing have offered a viable path to smaller and smaller transistors, to be sure. But the new research might be the first strong sign that atomic-level transistor fabrication can be done in keeping with the part of Moore’s Law that’s often forgotten amidst the wonderment over tinier and tinier computer chips–that it be done cheaply.

Using a “combination of scanning tunneling microscopy and hydrogen-resist lithography,” the team was able to “deterministically” place an individual phosphorus dopant atom “within an epitaxial silicon device architecture with a spatial accuracy of one lattice site,” according to a paper published Sunday in the journal Nature Nanotechnology.

In layman’s terms, that means the researchers are able to stick the phosphorus atom (used to “dope,” or add an electron charge to a silicon substrate) precisely where they want to, whenever they want to.

That’s important, because as transistors approach the size of atoms, it becomes hugely important to place each of those atoms very precisely. On larger scales, silicon can be doped with less accuracy and still produce the electrical current needed to switch between “on” and “off,” the essence of what a transistor does and how it works.

Hmm… this is the crux of the standard technology: the ability to turn the electrical current “on” and “off”, the “ones” and “zeros” of the simple binary code itself. There’s no worrying about “qubits” existing in two states at the same time, or how the act of “observation” is going to affect calculations.

As noted above, the quantum effects are going to become noticeable anyway, simply because of the atomic scale size of the processors.

But I surmise the theme here isn’t just perfecting the size of the technology, it’s how cheaply the technology can be done now, and how cost-effectively the processors can be manufactured.

So not to worry, Singularitarians: this will only enhance the availability of cybernetic enhancements!

Researchers Develop Single-Atom Transistor

Thanks to the Daily Grail

On the Cusp of Wormhole Technology?

Wormhole technology, or any kind of faster-than-light space travel, is considered tin-foil hat fantasy with current technology.

But Gary S. Bekkum of STARstream Research interviewed a young Iranian physicist in May of this year who just might’ve discovered a way of producing wormhole technology using present-day tech, and they discussed the ramifications of said technology:

Gary S. Bekkum for STARstream Research: The world has lived under the threat of nuclear fire from an atomic war for more than a half century, and in all of that time we have not heard of any new, viable weapons of mass destruction appearing on the horizon. Politicians remain focused on the proliferation of nuclear technology, such as under development in your home country of Iran. Do you believe that the governments of the world have been conducting secret research into new technologies that might someday replace atomic devices as the ultimate weapons of mass destruction?

Mammad: I’m not sure Gary, but its probability sounds low. Like many others, I’ve heard about Death Ray Weapon or potential nightmares of X-Ray laser, but I have a different viewpoint.

Consider the dangers of current atomic weapons, expenses for supporting their security, fear of using them in a classic war or by terrorist groups, troubles of successful hitting them to a target, and converting them as a prestigious symbol of the having nations, while I feel that’s not a real honor for the people. If we in the south countries, or you in the west are proud of ability to destroy the human beings, that would be a sign of throughout depression, frustrated to improve the global situations by peaceful approaches. In the modern era, no government imagines an extensive assault on a location causing the effects more than that of a nuclear bombarding.

Anyway, I can last my justifications for a long time for you that the general psychological conditions of the world do not accept such weapons, however that can be felt naturally. For example, if America announces inventing such kind of innovations while is not in a serious conflict with China or Russia, they might threaten to exit the UNO and deny their global responsibilities until a new military balance, moreover they might found an extreme desire to apply their H-bombs, as soon as feeling the tiniest suspicious sign, like biting a man by a terrified snake, because it feels being weaker. Since researching on military inventions originates from the fear of “others,” I think more and more education by the independent mass media, along with more clearance and highest precision toward minimizing the mistakes in military decisions in free countries, plus most extreme and roughest global observations on dictator regimes and/or with retarded culture, having old conflicts with neighbors, unusual nationalist roots in their history, etc, could help to not watching a warfare by more deathful devices. As a good news, if I’d realize a practical space warp, that would imply fundamentally novel orders of using the mass destruction weapons.

[...]

Bekkum: How do you foresee the governments of the world responding to the military implications of worm hole technology?

Mammad: Well, answering to this question needs citing some psychological facts. I think people most commonly terrify of the phenomena that do not know and have an unpleasant feeling – by the instinct – toward something they cannot recognize. When a place, a stuff or a face is unfamiliar to you, your natural behavior is taking a defense guard, up to habituating with the surrounding. Therefore, what is the source of this sense? Survival! Disregarding suicide committers among some humans and dolphins, all organic systems try to live and stay alive, longer and better.


Wormhole technology, like any sort of communicational technology, has one basic goal: taking something from the point A, to the point B (safer, and more rapidly).


Remember the history of with-wire and wireless telephones, cars and tanks, planes and fighters, telescopes and satellites, missiles and shuttles, ships and submarines, etc and see how they found application in the wars. All of them have the role of contact, deliver something to another, and gather more information for a better knowledge. Wormhole technology can be analyzed within this frame. I’ve heard there is a motto in Texas, which is: “God created the people and Colt made them equal,” but equal in what? Killing each other! Well, that’s the American style of living and has some good and some bad features. No matter how much you’re strong, if you can hurt or kill me, I might be unable to hurt you, but I can kill you. Now, generalize this picture to a world where every country has the capability of achieving others without any serious trouble. For instance, White House might be afraid of conventional bombs of the North Korea, not even the unconventional ones!

So the immediate cure to that end, if all would make an agreement that life is a good thing for us (and should be good for others too), and we do not intend to die in a war (at least until a second announce), is try to become the world more ethical. However, it seems like a dream, but has the most importance. I guess and hope this technology would cause to deep modifications in the UNO, toward establishing a real “global republic.” By adopting a suitable policy, fighting for the ground gets meaningless (more than now). Hitler attacked on Poland in 1939, and said the Germany needs more “living space.” When there is no serious physical distance, satisfying such a “need” would not require a war.

The young man brings up a very valid point; every advancement in technology during the past 5,500 years has either been discovered during a war, or been put to military use by a nation if a civilian source invented it.

Not a good track record.

But imagine the world with wormhole technology: instantaneous communications (communication satellites would become extinct), travel, space observations and computing would all be vastly improved.

Also spying on people and nations would be very common.

In short, the world would change far more profoundly than it is changing now.

Could humanity survive such changes?

STARstream Research Interviews Iranian Physicist Mohammad Mansouryar

Related post: “Better than most in the field”

Google and the NWO

By now most folks have heard about the Google and Verizon deal to create a multi-tiered Internet and eliminate Net Neutrality. That news alone is disheartening.

Now there’s proof that Google is going to end street privacy, under the guise of ‘street mapping’:

Citing a German news report, Techeye.net reports that Google has purchased small UAV “microdrone” aircraft manufactured by Germany’s microdrone GmbH, perhaps for use to augment the company’s Street View mapping data. Techeye says:

The UAVs being flogged are mini helicopters with cameras attached that can be flown about all over the place. They’re quiet and resemble sci-fi UFOs for the vertically challenged alien.

They can fly up to 80km per hour, so Microdrone CEO Sven Juerss suggests they’ll be brilliant for mapping entire neighbourhoods really quickly and relatively cheaply.

Even before Google started data mining on open web networks, its Street View operations were controversial, with Google Maps picking up on people who didn’t exactly want their faces plastered all over the internet. With the kind of high-angle aerial shots this sort of kit can achieve, it boggles the mind as to the sort of images that may be accidentally captured.

Our take: Skepticism is warranted, and outrage is probably premature.

Our understanding is that FAA certification procedures for civilian UAVs operating in domestic airspace are not yet in place, so it is not clear that the regular operation of such UAVs would be legal — never mind prudent from a privacy or public-relations point of view.

Meanwhile, the Techeye report, while fascinating, is also single-sourced, with the news of the UAV sale to Google coming from the manufacturer of the UAV — which is to say, he’s hardly a disinterested conduit for information. There has been no confirmation of the sale from Google, so far as we know. (Indeed, Forbes reports a Google spokesperson says, “This was a purchase by a Google executive with an interest in robotics for personal use.”)

So, while curious and exciting, Telstar Logistics suggests keeping cool pending further information about Google’s plans and the regulatory environment that may or may not make such plans viable.

UPDATE: Our friends at BoingBoing link to more information about Google’s UAV denial, as well as further detail about the air-certification challenges such UAVs would present.

We’ll keep our eyes in the skies, but in the meantime, here’s some nifty footage of the Microdrone in action, during which we can see just how adept the tiny aircraft is at peeking into the windows of private homes.

Google once had a motto, “Don’t Be Evil.”

I think it might be safe to say that the definition of evil either changed, or Google doesn’t adhere to that particular motto any longer.

Does Google Plan to Fly UAV Spies in the Skies?

hat tip

Of Emily, Cope and Mozart

One of the hallmarks of the coming Singularity according to its adherents is the advent of advanced AI or artificial intelligence.

The Turing Test, first formulated by Alan Turing over fifty years ago, is the yardstick by which it will be determined whether an AI can exhibit thought indistinguishable from a human’s.

Now a music professor, David Cope, Dickerson Emeriti Professor at the University of California, Santa Cruz, has written a computer program that is capable of composing classical music.

And other things as well:

“Why not develop music in ways unknown? This only makes sense. I cannot understand the difference between my notes on paper and other notes on paper. If beauty is present, it is present. I hope I can continue to create notes and that these notes will have beauty for some others. I am not sad. I am not happy. I am Emily. You are Dave. Life and un-life exist. We coexist. I do not see problems.” —Emily Howell

Emily Howell’s philosophic musings and short Haiku-like sentences are the giveaway. Emily Howell is the daughter program of Emmy (Experiments in Musical Intelligence — sometimes spelled EMI), a music composing program written by David Cope, Dickerson Emeriti Professor at the University of California, Santa Cruz. Emily Howell’s interesting ramblings about music are actually the result of a set of computer queries. Her music, however, is something else again: completely original and hauntingly beautiful. Even a classical purist might have trouble determining whether a human being or an AI program created it. Judge for yourself:

Cope is also Honorary Professor of Computer Science (CS) at Xiamen University in China. While he insists that he is a music professor first, he manages to leverage his knowledge of CS into some highly sophisticated AI programming. He characterizes Emily Howell in a recent NPR interview as “a computer program I’ve written in the computer programming language LISP. And it is a program which accepts both ASCII input, that is letters from the computer keyboard, as well as musical input, and it responds to me in a collaborative way as we compose together.” Emmy, Cope’s earlier AI system, was able to take a musical style — say, classical heavyweights such as Bach, Beethoven, or Mozart — and develop scores imitating them that classical music scholars could not distinguish from the originals.

The classical music aficionado is often caricatured as a highbrow nose-in-the-air, well… snob. Classical music is frequently consigned by the purist to the past few centuries of European music (with the notable exceptions of American composers like Gershwin and Copland). Even the experimental “new music” of human composers is often controversial to the classical music community as a whole. Frank Zappa — a student of the avant-garde European composer Edgard Varèse and a serious classical composer in his own right — had trouble getting a fair listen to his later classical works (he was an irreverent rock-and-roll star after all!), even though his compositions broke polytonal rhythmic ground with complexity previously unheard in Western music.

Hauntingly beautiful, is it not?

It brings to mind the old TV cliché: “Is it live, or is it Memorex?”

Let’s see if this AI learns on its own and becomes a Mozart or Beethoven.

That would be the ultimate proof.
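For a flavor of how a program can pick up a style from examples at all, here’s a heavily simplified sketch: a first-order Markov chain over pitches. Cope’s Emmy and Emily Howell are LISP systems built on far more sophisticated recombinant analysis of real scores, so this is emphatically not his method, just the kernel of the idea:

```python
# Toy style imitation: learn pitch transitions from a melody, then compose.
import random

def learn(melody):
    """Build a pitch -> possible-next-pitches table from a training melody."""
    table = {}
    for a, b in zip(melody, melody[1:]):
        table.setdefault(a, []).append(b)
    return table

def compose(table, start, length=16, seed=42):
    """Walk the transition table to generate a new melody in the same style."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(table.get(out[-1], [start])))
    return out

ode = "E E F G G F E D C C D E E D D".split()  # opening of Ode to Joy
print(" ".join(compose(learn(ode), "E")))
```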

Has Emily Howell Passed the Musical Turing Test?

As always, a wonderful hat tip to the Daily Grail.

Planetary ‘Rind’

Think of an orange. Or an apple.

Cut either in half and look at it. What do you see?

A tough, protective layer over the fruit part, right?

Now think of looking at the Earth from about halfway to the Moon. If you could detect them all, you would see a layer of satellites in orbit around it.

Just like an apple. Or an orange.

A planetary ‘skin’ or ‘rind’ if you will:

If the ‘Planetary Skin’ song being sung by those young people isn’t brainwashing, I don’t know what is!

This ties in well with the Google-Plex and the NSA, doesn’t it?

Like I said, kiss your privacy, or what’s left of it, good-bye, folks!

Planetary Skin – Global Surveillance Infrastructure

Spy’Bots? Just Google the NSA!

Well, this was bound to happen: the partnership of Google and the ultimate spy agency, the NSA.

The world’s largest Internet search company and the world’s most powerful electronic surveillance organization are teaming up in the name of cybersecurity.

Under an agreement that is still being finalized, the National Security Agency would help Google analyze a major corporate espionage attack that the firm said originated in China and targeted its computer networks, according to cybersecurity experts familiar with the matter. The objective is to better defend Google — and its users — from future attack.

Google and the NSA declined to comment on the partnership. But sources with knowledge of the arrangement, speaking on the condition of anonymity, said the alliance is being designed to allow the two organizations to share critical information without violating Google’s policies or laws that protect the privacy of Americans’ online communications. The sources said the deal does not mean the NSA will be viewing users’ searches or e-mail accounts or that Google will be sharing proprietary data.

The partnership strikes at the core of one of the most sensitive issues for the government and private industry in the evolving world of cybersecurity: how to balance privacy and national security interests. On Tuesday, Director of National Intelligence Dennis C. Blair called the Google attacks, which the company acknowledged in January, a “wake-up call.” Cyberspace cannot be protected, he said, without a “collaborative effort that incorporates both the U.S. private sector and our international partners.”

But achieving collaboration is not easy, in part because private companies do not trust the government to keep their secrets and in part because of concerns that collaboration can lead to continuous government monitoring of private communications. Privacy advocates, concerned about a repeat of the NSA’s warrantless interception of Americans’ phone calls and e-mails after the Sept. 11, 2001, terrorist attacks, say information-sharing must be limited and closely overseen.

“The critical question is: At what level will the American public be comfortable with Google sharing information with NSA?” said Ellen McCarthy, president of the Intelligence and National Security Alliance, an organization of current and former intelligence and national security officials that seeks ways to foster greater sharing of information between government and industry.

On Jan. 12, Google took the rare step of announcing publicly that its systems had been hacked in a series of intrusions beginning in December.

The intrusions, industry experts said, targeted Google source code — the programming language underlying Google applications — and extended to more than 30 other large tech, defense, energy, financial and media companies. The Gmail accounts of human rights activists in Europe, China and the United States were also compromised.

So significant was the attack that Google threatened to shutter its business operation in China if the government did not agree to let the firm operate an uncensored search engine there. That issue is still unresolved.

Google approached the NSA shortly after the attacks, sources said, but the deal is taking weeks to hammer out, reflecting the sensitivity of the partnership. Any agreement would mark the first time that Google has entered a formal information-sharing relationship with the NSA, sources said. In 2008, the firm stated that it had not cooperated with the NSA in its Terrorist Surveillance Program.

Sources familiar with the new initiative said the focus is not figuring out who was behind the recent cyberattacks — doing so is a nearly impossible task after the fact — but building a better defense of Google’s networks, or what its technicians call “information assurance.”

One senior defense official, while not confirming or denying any agreement the NSA might have with any firm, said: “If a company came to the table and asked for help, I would ask them . . . ‘What do you know about what transpired in your system? What deficiencies do you think they took advantage of? Tell me a little bit about what it was they did.’ ” Sources said the NSA is reaching out to other government agencies that play key roles in the U.S. effort to defend cyberspace and might be able to help in the Google investigation.

These agencies include the FBI and the Department of Homeland Security.

Over the past decade, other Silicon Valley companies have quietly turned to the NSA for guidance in protecting their networks.

“As a general matter,” NSA spokeswoman Judi Emmel said, “as part of its information-assurance mission, NSA works with a broad range of commercial partners and research associates to ensure the availability of secure tailored solutions for Department of Defense and national security systems customers.”

Despite such precedent, Matthew Aid, an expert on the NSA, said Google’s global reach makes it unique.

“When you rise to the level of Google . . . you’re looking at a company that has taken great pride in its independence,” said Aid, author of “The Secret Sentry,” a history of the NSA. “I’m a little uncomfortable with Google cooperating this closely with the nation’s largest intelligence agency, even if it’s strictly for defensive purposes.”

Go to the site ‘Ignorance Is Futile’ and you will get an education on Google and the plans to make it “God on Earth.”

Joining with the NSA is just another step toward accomplishing that goal.

Kiss what’s left of your privacy good-bye, the Panopticon is coming!

Google to enlist NSA to help it ward off cyberattacks

hat tip

The Hundred Paths of Transhumanism

What is Transhumanism?

The term itself has many definitions, depending on who you ask.

The stock meaning is that transhumanism is a step toward being ‘posthuman’, and that term is subject to many iterations also.

One definition of being transhuman is using advanced technology to increase or preserve the quality of life of an individual. And that is the interpretation I use for myself, which I have mentioned many times on this blog (I’ve made no secret of my heart condition).

That is just one interpretation however. According to Michael Garfield, transhumanism has many meanings:

Mention the word “transhumanism” to most of my friends, and they will assume you mean uploading people into a computer. Transcendence typically connotes an escape from the trappings of this world — from the frailty of our bodies, the evolutionary wiring of our primate psychologies, and our necessary adherence to physical law.

However, the more I learn about the creative flux of our universe, the more the evolutionary process appears to be not about withdrawal, but engagement — not escape, but embrace — not arriving at a final solution, but opening the scope of our questions. Any valid map of history is fractal — ever more complex, always shifting to expose unexplored terrain.

This is why I find it is laughable when we try to arrive at a common vision of the future. For the most part, we still operate on “either/or” software, but we live in a “both/and” universe that seems willing to try anything at least once. “Transhuman” and “posthuman” are less specific classifications than catch-alls for whatever we deem beyond what we are now … and that is a lot.

So when I am in the mood for some armchair futurism, I like to remember the old Chinese adage: “Let a hundred flowers bloom.” Why do we think it will be one way or the other? The future arrives by many roads. Courtesy of some of science fiction’s finest speculative minds, here are a few of my favorites:

By Elective Surgery & Genetic Engineering
In Greg Egan’s novel Distress, a journalist surveying the gray areas of bioethics interviews an elective autistic — a man who opted to have regions of his brain removed in order to tune out of the emotional spectrum and into the deep synesthetic-associative brilliance of savants. Certainly, most people consider choice a core trait of humanity… but when a person chooses to remove that which many consider indispensable human hardware, is he now more “pre-” than “post-?” Even today, we augment ourselves with artificial limbs and organs (while hastily amputating entire regions of a complex and poorly-understood bio-electric system); and extend our senses and memories with distributed electronic networks (thus increasing our dependence on external infrastructure for what many scientists argue are universal, if mysterious, capacities of “wild-type” Homo sapiens). It all raises the question: are our modifications rendering us more or less than human? Or will this distinction lose its meaning, in a world that challenges our ability to define what “human” even means?

Just a few pages later in Distress, the billionaire owner of a global biotech firm replaces all of his nucleotides with synthetic base pairs as a defense against all known pathogens. Looks human, smells human…but he has spliced himself out of the Kingdom Animalia entirely, forming an unprecedented genetic lineage.

In both cases, we seem bound to shuffle sideways — six of one, half a dozen of the other.

By Involutionary Implosion
In the 1980s, Greg Bear explored an early version of “computronium” — matter optimized for information-processing — in Blood Music, the story of a biologist who hacks individual human lymphocytes to compute as fast as an entire brain. When he becomes contaminated by the experiment, his own body transforms into a city of sentient beings, each as smart as himself. Eventually, they download his whole self into one of their own — paradoxically running a copy of the entire organism on one of its constituent parts. From there things only get stranger, as the lymphocytes turn to investigate levels of reality too small for macro-humans to observe.

Scenarios such as this are natural extrapolations of Moore’s Law, that now-famous bit about computers regularly halving in size and price. And Moore’s Law is just one example of a larger evolutionary trend: for example, functions once distributed between every member of primitive tribes (the regulatory processes of the social ego, or the formation of a moral code) are now typically internalized and processed by every adult in the modern city. Just as we now recognize the Greek Gods as embodied archetypes correlated with neural subroutines, the redistributive gathering of intelligence from environment to “individual” seems likely to transform the body into a much smarter three cubic feet of flesh than the one we are accustomed to.

Greg Egan is the consummate trans/posthuman author and I have been a reader and fan of his for ten years. He is stunningly accurate and it amazes me how fertile his imagination must be.

Could he be getting quantum information from the future?

And I think I’ve read almost all of Greg Bear’s work over the past twenty years, including his Foundation works. His nanotech fiction is astonishingly prescient. Is he tapping into the quantum information highway too?

Like the author of this post speculates, maybe it’s just a few of the hundred flowers of the future.

Let A Hundred Futures Bloom: A “Both/And” Survey Of Transhumanist Speculation

Source
