The current political and economic systems hold profits ahead of other considerations, so that large corporations like Koch Industries can abuse both their workers and the environment in ways that should be controlled by state intervention in the form of regulation. However, an army of lobbyists and vested interests in both Washington and London have been pushing to deregulate wherever possible.

We have been witnessing a sinister political and ideological transformation in attitudes toward government controls. There appears to be a desire in various segments of society for less state steering and regulation, to be replaced with ever greater freedom for both the market and privatization. Shrinking the government is one of the structural changes focused on reducing, cutting, or even closing down numerous existing policies for ideological, political or economic reasons.

Following the economic crisis of 2008, the intense economic austerity programs imposed by different governments affected many aspects of society, including the dismantling of various social benefits, pensions, and controls over air and water pollution. The scaling back was camouflaged as “efficiency savings,” “cutting red tape,” “reform,” “retrenchment,” or “deregulation.” Such linguistic variations were the work of obfuscating politicians seeking to avoid blame.1

Last February, President Trump signed an executive order placing “regulatory reform” task forces and officers within federal agencies in an effort to pare down the massive red tape of recent decades. Another executive order, “Reducing Regulation and Controlling Regulatory Costs,” called for all government agencies to eliminate two existing regulations every time a new one is issued. Furthermore, the cost of any new regulation had to be offset by the two being removed. This order was swiftly renamed “one step forward, two steps back” by many of those working in public health as well as other public services.

Stephen Bannon, the ideologue who was initially Trump’s top strategy adviser, announced early on that his goal was “the deconstruction of the administrative state.” Fortunately he was fired, but conservatives still hoped that funding for the enforcement of laws such as the Clean Air Act would be reduced, as would that of drug and food safety agencies. Indeed, the White House withdrew or removed from consideration some 800 proposed regulations that had been drafted under the Obama administration but never activated. Trump then identified some 300 regulations related to energy production and environmental protection spread across the Environmental Protection Agency as well as the Interior and Energy Departments. White House budget director Mick Mulvaney said these measures were to “slow the cancer that had come from regulatory burdens that we put on our people.” Representatives of the gas and oil industries, naturally, cheered.

Yogin Kothari of the Union of Concerned Scientists countered that “Six months into the administration, the only accomplishments the President has had is to roll back, delay and rescind science-based safeguards.” The administration’s regulatory agenda revealed its objective. Kothari insisted that “It continues to perpetuate a false narrative that regulations only have costs and no benefits.”

More broadly, “dismantling” incorporates a way of thinking. Neo-conservatives like Richard Perle and David Frum declared a couple of decades ago that “A free society is a self-policing society.” This was part of a larger drive to discredit the state as a source of redress for hardships. In the United Kingdom there were similar attacks from leaders of the Tory party, who desired a new focus emphasizing greater community and local government powers. This has resulted, for example, in established food safety structures being quietly dismantled.

A special correspondent for The Guardian recently wrote that “Local authorities — a crucial pillar in the edifice since they have legal responsibility for testing foods sold in their area — are so starved of money that they have cut checking to the bone.”2 The result is that the Food Standards Agency is in the process of rewriting much of the basis of food regulation in the United Kingdom and, as a consequence, commercial interests will be protected more than consumers. Big businesses, like supermarkets, will be pleased by privatized inspection and certification schemes that promise a more “commercially astute” approach, such as one covering the sale of outdated foods like chicken products.

Lobbyists in England, as in the US, bait lawmakers as well as the national audience with plausible concerns. They suggest that “overreaching regulations” harass start-ups and small businesses, and that educational and training requirements on a number of professions impose costs on low- and middle-income workers striving for better positions. The lobbyists then propose that stripping away regulations and consumer protections is the easiest way to lower such costs. They ignore other solutions that would lower the burdensome entry costs for those enrolled in such education and training.

I believe that there are genuine and rational reasons to question the construction of mountains of bureaucratic regulations. Yet many of these regulations reflect serious concern about the environment, worker safety, pensions, health, indeed about almost everything affecting human beings. I have long felt that common sense exercised on most issues regarding human welfare would be preferable to regulatory excesses.

Federal laws like NEPA (the National Environmental Policy Act), as well as state-level regulations and rules, have ensured that citizens are protected from the harms of less responsible businesses and corporations. Environmental regulations prohibit these from disposing of industrial wastes irresponsibly and serve to protect the health of both workers and communities. OSHA (the Occupational Safety and Health Administration) has some 3,500 specific provisions covering the health and safety of construction workers. Detailed regulations on the electronic reporting of job injuries and on illness from workplace air pollution impose fines and other sanctions that make it costly for irresponsible parties to act recklessly. However, much of such protective regulation is currently in jeopardy. Lobbyists and opponents in Congress suggest that publicly displaying information under the injury-reporting requirements would unfairly damage the reputation of employers. Pushing aside concerns about the dangers to workers exposed to silica and beryllium, President Trump has been eager to roll back President Obama’s 2014 executive order titled “Fair Pay and Safe Workplaces.”

The neoliberal program which has been envisioned aims to switch our values of “the public good and the public interest” to a value system based on “the market” and individual responsibility. Sendhil Mullainathan, an economics professor at Harvard, suggests that “New technologies are rattling the economy on all fronts. While the predictions are specific and dire, bigger changes are surely coming. Clearly we need to adjust for the turbulence ahead.” He believes that the neoliberal agenda could give way to a new focus incorporating an authoritarian mode of economics aimed at accountability and the “audit culture.” Mullainathan cautions: “A lifetime of work will be a lifetime of changing, moving between firms, jobs, careers and cities.” Brushing aside the costs of such purportedly creative destruction, he believes “we ought to enable innovation to take its course.”3 Such excuses for the unfettered pursuit of profit would end the system of protective regulations which has taken decades to develop. It seems obvious to me that regulation is essential for the democratic state. In our daily lives we drive our cars, take our pills, drink our water, and comfortably eat most foods because we take the safety regulations covering all these acts for granted.

France’s new President, Emmanuel Macron, has said “we need to rethink regulation, so as to deal with the excesses of globalized capitalism.”4 The devious excesses of the current economic system manifestly threaten our future. By now, it should be clear to every voter and citizen that deregulation is generally not in the public interest and should be fiercely resisted if we truly want to advance the common good.

1Michael W. Bauer et al., Dismantling Public Policy (2014), pp. 30-56.

2Felicity Lawrence, “Vital protections are being dismantled,” The Guardian, August 25, 2017, p. 31.

3Sendhil Mullainathan, “Planning to cope with what you can’t foresee,” The New York Times, September 5, 2017.

4“Regeneration,” The Economist, September 30, 2017, p. 12.



For those of you who accidentally received a first draft of this blog last week, this one is quite different in its overall perspective. You may be amused by the radical changes made.

As a member of the older generation, I find the changes I continue to face in everyday life historically unprecedented, wide-ranging, and in many ways controversial. A number are difficult to handle, or even to tolerate, for people of many different ages.

I find the continuing acceleration in the pace of life disturbing. Everybody is “busy” most of the time. We race from one place to another, spend too much time in traffic jams, and rush through what we have to read, see on television, and follow on our computers. Meals are cut short, and Victorian-style afternoon teas are no longer in fashion: they are too time-consuming.

Through the dynamism of both technology and finance, we have changed not only the pace of life but also its quality and direction. Money (that is, profit) has been the driving force of capitalism, but almost no attention has been paid to the effects most innovations have on human beings. In my last blog I focused on the unknown impact of iPads and tablets on infants. That was not the occasion to examine the possible impact of computers, mobiles and automation on adults.

What first comes to mind is what I am doing right now! The hours spent every day on my computer are bad for my back, my eyes, my hands and my spirits. I still love writing with pen or pencil, which I find wonderful but slow, and I, too, am often in a hurry. I am not on Facebook or the other social networks because they would intrude into my moments of leisure, time in the garden, or time to reflect.

So where can we take the currently uncontrolled and unplanned advances of technology, which are popularly assumed to culminate in Artificial Intelligence? How are we to test the effects of automation on human beings as well as on entire societies? It is evident that as long as money and profits remain the prime driving force, there is little possibility of controlling the advance of untested, however desirable, technology-driven innovations and their effects on our brains and mental states.

Let me suggest that the pharmaceutical industry is a good example of what the Silicon Valley giants could try to copy: in most countries almost all new medicines have to pass a variety of rigorous tests of their suitability for patients. If this difficult and bureaucratic program works effectively in protecting our physical health, why could similar tests not be applied to the mental well-being of those subjected to electronically generated waves, from headsets to our everyday iPhones? We have little idea at the moment of what we are subjecting our brains (and hearts) to, and of what damage many electronic devices may cause.

On a broader level, some of the impacts of the new technology on the younger generation are evident: many no longer communicate in writing on paper and tend toward minimalism when it comes to expressing themselves. They don’t even like to use the telephone, regarding it as a medium for old-timers. I have been advised by a son that he no longer reads any email which extends beyond two terse paragraphs. As a writer, I find all of this poses cultural challenges which we could perhaps correct in schools and universities over time.

As a writer and former journalist, I am most disturbed by the newly popularized crisis of faith in journalism. The masses like to get the instant flow of events from Twitter and the online news organizations. What with the perverters of the truth, like Murdoch’s Sun newspaper in the UK and Fox News in the USA, the press increasingly gives readers the scandals they want rather than informing them of the events which might increase their knowledge or might be useful. For that matter, I have to confess that getting the Trump scenarios out of my mind is becoming an everyday challenge.

Even much of our economics is becoming unfathomable: Bitcoin and other digital crypto-currencies make no sense to me. They seem to be new instruments for gamblers, tax evaders, and high-tech risk takers rather than money to be used every day. Governments’ controls on QE (Quantitative Easing), through which billions upon billions of dollars, pounds and other currencies have been pumped into bank reserves, also seem most dubious. The whole QE process comes straight out of wonderland and tends to confuse minds, even in government, about reality.

I must balance these deep concerns by noting positive advances in so many areas. I am most enthusiastic about the giant greenhouses modeled on the Eden Project in Cornwall. Its co-founder, Sir Tim Smit, wants “to create oases of change… our job is to create a fever of excitement about the world that is ours to make better.” His group is now planning the construction of giant greenhouse domes in China, Australia and New Zealand.

I find GPS, which directs our way around the world from outer space, a marvelous technological breakthrough, much as it may do away with our former ability to read maps. This is one variation of the impact technologies have on our abilities: when schoolchildren some fifty years ago were given simple hand-held adding machines, they quickly forgot how to do their sums.

The miracle cures for cancer that exploit the powers of genetics and our human immune systems are to be lauded. The related advances in gene-editing techniques promise extraordinary solutions to many of our genetically based illnesses. However, as with medicines, we should try to advance more carefully, with intense examination of the possible consequences, rather than triumphantly announcing breakthroughs. The moral challenges we face with the introduction of gene editing must be handled with enormous care and consideration. Our perspective on how to protect our minds after all these millennia of change and development must not be corrupted by the lure of money, nor even by the competitive egos of leading scientists.1

Governments around the world are now planning to ban all diesel and petrol vehicles over the next 25+ years because rising levels of nitrogen oxide present a major threat to public health and contribute to climate change. If governments can do this on a cooperative basis, why can they not start research on whether the electronic products of Silicon Valley are contributing to mental and social imbalances in the population?

Thankfully, there are numerous aspects of our evolving cultures, like the above, which are greatly encouraging. I think it is most important to focus on these to bring greater hope to millions of people who have become deeply discouraged by the universal focus on capitalist competition, celebrity, and terrorism in this new millennium. I am advocating that the wonders of being alive on this incredible planet truly should be the basis for much of future optimism in the next generations.

1Yorick Blumenfeld, Towards the Millennium, (1996) pp. 421-428


I was rattled in a restaurant recently watching a couple encouraging their year-and-a-half-old toddler to slide a finger across his iPad. The little one was excited to see the changes on the screen. A few weeks later I observed a two-year-old grabbing his four-year-old brother’s iPad and operating it vengefully! For a few of the “advanced” members of this age group, the first word is not “Mom” or “Dad” but “Pad.” Some toddlers have even become addicted to these electronic wonders! What is happening is that children are being subjected to unknown and untested challenges to their personal development.

There are numerous videos on YouTube of these little ones sliding their fingers across the pages of magazines lying on the kitchen table in an effort to activate them. Parents may wonder how the tablets affect their young ones, but most are pleased that quiet reigns in the house and they rationalize that even at this early stage of life their offspring are learning how to focus and develop their attention spans. However, some mothers and fathers are so fearful of the possible consequences that they have chosen to deny their kids access to these technological marvels.

“There is something important going on here and we need to learn what effects this is having on learning and attention, memory and social development,” says Jordy Kaufman, director of BabyLab, one of the rare groups researching this area, which operates under the auspices of Australia’s Swinburne University.1 His team is trying to learn how iPads and tablets affect the long-term mental development of the very young, using innovative approaches to explore the cognitive as well as social aspects of brain development.

The techniques at BabyLab include behavioral eye tracking, which measures observable changes in development, for example whether babies have a preference for faces over objects, as well as electro-physiological methods, which track the changes in brain activity that occur when resting or responding to iPads. Toddlers do detect subtle changes when they see something happening on the screen, like a change of color, an object in motion, or a face. The youngsters may empathize with what they observe; their instant reaction is: “Is that me? Is that another?” For an instant they relate, because that is the way their brains are wired. Some of the very young may believe that the iPad is alive, but most intuitively accept that it is not.

While in-depth studies have been made of the effects of television on the younger generation, very little research has been done on the effect of tablets on those of kindergarten age. Indeed there may be benefits for the very young in developing motor skills as they learn to push buttons and softly slide their fingers, and their exposure to tablets may give them a kick-start to learning. However, Kaufman cautioned that “There is a school of thought that tablet use is rewiring children’s brains, so to speak, to make it difficult for them to attend to slower-paced information.”

Denying children access to iPads entails risks, contends Rose Flewitt, who is doing research at the Institute of Education at the University of London on how iPads can help literacy at the nursery and primary levels. “Having one section of society that is growing up with skills and one section that is growing up without it” is problematic, she posits. On the other hand, tablets and iPads do nothing to foster social skills in the very young.

The immediate response to pushing a button is highly satisfying and pleasurable for children, who delight in the lights, images and sounds that emerge. No admonishments come from the iPads, but neither does any positive feedback. The electronic instruments are fast, dependable and soon become familiar. However, the cold glass, plastic and metal of the tablets’ casings provide only limited sensory experience for the very young; there is none of the comfort provided by traditional cuddlys and stuffies. The experts wonder whether a profound shift in childhood mindset may be taking place here without our understanding it. It is appalling how little is known about the effects of the rapid and continuing advances in educational technology these children now experience.

What is certain is that many of the new generation get hooked on these irresistible devices. By the time they are teenagers they are likely to spend close to eight hours a day using electronics like computers, TV sets, smartphones and iPads, as most American 13-year-olds do today. However, I shall not wander into the more advanced levels of the $100 billion educational technology industry (which here encompasses the combined European and North American markets), whose continual development is driven less by the needs of students and teachers than by the profit motive.

Over the past three decades we have seen computers used to improve efficiency in classrooms and keep pupils engaged, but they have not transformed learning in the way the promoters had predicted. It is basically unknown whether educational technology is advancing the potential of the new generation. The Economist contends that a succession of inventions has promised to overhaul education, but none has done so yet: money spent on IT in schools has made little difference to the abilities of 15-year-olds in maths, science and reading compared with those of pupils who have received no such investment.2

It seems evident to me that what is happening electronically in the early stages of children’s lives is now one of the basic challenges to their overall mental development. We simply don’t know how abandoning the reading of books, the listening to stories, and other aspects of traditional education will affect future generations. Artificially personalized machine contacts are unlikely to match the look of human eyes, the sound of a genuine voice, the scent of the adult, the warmth and familiarity of touch, all of which exert a personal impact whose combined effects on the psyche cannot be overestimated. I believe we are putting our culture and entire civilization at great risk if we allow “the technological” to overwhelm “the human” during the introduction of the new generations into this world.

1Paula Cocozza, “Children of the Revolution,” The Guardian, January 9, 2014

2”Machine Learning,” The Economist, July 22, 2017, p.18


Even before this era of “fake news” and the easy willingness to mix lies and truth, I was already deeply concerned about the swift decline in our belief in ethical rights and wrongs. I accept that we may find it increasingly difficult, given the distractions of social media, to live by our traditional ethical guidelines. However, I feel strongly that there is a universal need to accept the principles of right and wrong which resonate within us.

Historically, morals, affiliations, and religions have all been dependent on strongly held convictions about right and wrong. Philosophers, beginning with Socrates (469–399 BC), have long debated the foundations of moral decision-making. Socrates was one of the earliest of the Greek philosophers to focus on self-knowledge in such matters as right and wrong. He advanced the notion that human beings would naturally do good if they could rationally distinguish right from wrong. It followed that bad or evil actions were the consequence of ignorance. The assumption was that those who know what is right automatically do it. Socrates held that the truly wise would know what is right, do what is good, and enjoy the result. However Aristotle, who studied under Socrates’s pupil Plato, held that precise knowledge of right and wrong was far more unlikely in ethics than in any other sphere of inquiry. Aristotle thought ethical knowledge was dependent on custom, tradition and social habits in ways that made it distinctive.

Only much later did John Locke strike out in a new direction with his determination to establish a “science of ethics.” He went astray in his search but, as we shall see, the project was to be picked up again by neuroscientists hundreds of years later. David Hume then went on to argue that empathy should guide our moral decisions and ultimate ideals.

John Stuart Mill in the mid 19th century advanced liberalism in part by advocating that following what is right would lead to an improvement of our lives. “Actions are right in proportion as they tend to promote happiness; wrong as they tend to produce the reverse of happiness,” Mill wrote.1 Admittedly many actions in this colonial era increased the well-being of some while inflicting suffering on others. “Wrong” often boiled down to selfishness while “right” encompassed willingness to take personal responsibility for considering the consequences that such actions might have for others.

Today “right” and “wrong” are generally assumed to come from schooling, parental teaching, and legal and religious instruction. However, primatologists like Marc D. Hauser, a Harvard biologist, contend that the roots of human morality are also evident in such social animals as apes and monkeys, which display the feelings of reciprocity and empathy essential for group living. Hauser has built on this to propose that evolution wired survival, among other social factors, into our neural circuits.2 The swift decisions that had to be made in life-or-death situations were not accessible to the conscious mind. Hauser’s ultimate objective is to get morality accepted as being objectively true. This counters what most people in the northern hemisphere believe: that ethics are relative to times, cultures and individuals. Thus questions like gender, abortion, capital punishment and euthanasia waver in the winds of right and wrong.

The prolific Anglo-Irish writer Brian Cleeve (1921-2003) asked: “Has the time arrived again when people must make moral standards a personal crusade? Has the time come to stand up and be counted for the difference between right and wrong?”3 Cleeve contended that “In our modern eagerness to be tolerant, we have come to tolerate things which no society can tolerate and remain healthy. In our understandable anxiety not to set ourselves up as judges, we have come to believe that all judgments are wrong. In our revulsion against hypocrisy and false morality, we have abandoned morality itself. And with modest hesitations but firm convictions I submit that this has not made us happier, but much unhappier.” In his book 1938: A World Vanishing, he held that at that time the average man and woman in Britain “possessed a keen notion of what was right and what was wrong, in his and her own personal life, in the community, and in the world at large.”

The entry of neuroscientists, experimental psychologists, and social scientists into the search for a possibly physical basis for such philosophical challenges as right and wrong has led to experiments with brain-scanning technology. The work of Harvard professor Joshua Greene has led him to conclude that “emotion and reason both play critical roles in moral judgment and that their respective influences have been widely misunderstood.”4 Greene’s “dual-process theory” posits that emotion and rationality enter into moral decision-making according to the circumstances. He uses functional magnetic resonance imaging (fMRI) to examine specific areas of the brain as it functions, comparing the flow of blood to the amygdala (the seat of emotions) with the flow to the prefrontal cortex (which houses deliberative reasoning). The results, Greene believes, illustrate that even when humans are calculating abstract probabilities, they also may rely on emotions for guidance. “Reason by itself doesn’t have any ends, or goals,” Greene concludes. “It can tell you what will happen if you do this or that, and whether or not A and B are consistent with each other. But it can’t make the decision for you.” Greene believes that by learning more about the neurological mechanisms involved in moral decision-making, people could eventually improve the way they make their judgments. Rationality cannot function independently of emotions, even in those who are utilitarian or rational decision makers.

Globally we have come to separate ethics and politics. No group can impose its moral conceptions on society at large. Social media are powerful in creating herds of subscribers to groups whose facades of universal values mask narrow interests and replace ethics. Members need to be “right” in order to feel popular. Those who believe they are right are sharply divided from those they perceive to be wrong. Most people want to be right as an indication of their intelligence, their power, their vision, and ultimately of their desire for admiration and acknowledgment of their status. Like exhibitionist peacocks, some almost seem desperate to display their “superiority.” Our psychological make-up traditionally strengthens such positions. William Hazlitt wrote some 200 years ago that “We are not satisfied to be right unless we can prove others to be quite wrong.”5

Some three generations ago Adolf Hitler insisted that “Success is the sole earthly judge of right and wrong.” I suspect there are contemporary leaders who might agree with such an extraordinary assumption. I feel that the requirements of a moral life are unlikely to be promoted by the current political leaderships. The sociologist Max Weber held that the ethic of responsibility in politics could only be satisfied if we demand the minimum of internal and external danger for all concerned.6 I regret to say that this demand seems unlikely to be followed, but personally I believe that individual responsibility, which must entail a good measure of rationality, is absolutely essential if there is to be a reversal of the fast-fading social significance of human Rights and Wrongs.

1John Stuart Mill, Utilitarianism, ch. II (1863).
2Marc D. Hauser, Moral Minds (2006).
3Brian Cleeve, 1938: A World Vanishing (1982).
4Peter Saalfield, “The Biology of Right and Wrong,” Harvard Magazine, January 2012.
5William Hazlitt, Conversations of James Northcote (1830).
6Max Weber, “Politics as a Vocation,” Essays in Sociology (1946), p. 119.


I had just started thinking about the commercialization of death about a year ago when I was bowled over by a promotional leaflet sent to me by the UK’s Co-operative Society suggesting what fun death could be! My instinctive reaction was: How inappropriate can you get? In my experience, death has brought an end to the fun we could expect from life. Indeed, the lyrics of the Grateful Dead have not been ringing in my ears.

Now, a year later, I find a long article starting on the front page of The New York Times, “Celebrating at his own wake.” Its correspondent describes in detail how a fatally ill former priest, John Shields, carefully planned his last hours before a lethal injection was administered by his doctor. What he wanted was a wonderfully boozy Irish wake celebrated by some two dozen friends while he was still alive.1

At his ultimate party, Shields’ friends proclaimed their love, gratitude and admiration for their host. Without the increasingly invasive promotional efforts of the funeral trade, the small group expressed their thanks for his friendship and his courage. When one of them planted a kiss on his lips, Shields aroused much laughter when he quipped, “I was just thinking, ‘I’d like more of that,’ then thought, ‘That’s not a good idea.’”

Towards the end of his own wake, Shields had wanted to join in singing the verses of a special departure song with the classic Celtic folk lyrics of “The Parting Glass”:

But since it falls, unto my lot,
That I should rise and you should not,
I’ll gently rise and I’ll softly call,
Good night and joy be with you all.

However, a tired and sick Shields was drifting off to sleep; he later managed to wave to his friends as he was wheeled out of the party, smiling and telling everyone, “I’ll see you later.”

Perhaps our social attitudes are changing. Maybe it is time to put fun back into our funerals? Could we turn the wake into the party of a lifetime? Promoters suggest a special day themed on the ancient Scots, New Orleans jazz, or even a Dadaist celebration. The happy ending of such a wake would be one way to evade our fears of the unknown. Evasion, like denial, has been the classic psychological way of cheering up gatherings overcome with grief. As George Bernard Shaw declared: “Life does not cease to be funny when people die any more than it ceases to be serious when people laugh.”

At my own father’s rushed funeral in an over-populated Italian cemetery on the outskirts of Rome (where my father’s assistant and I were the sole mourners on a hot July afternoon), I was informed by a brusque Mafia undertaker that my father was being buried in an all-male section of the cemetery. This was because the law forbade any intermingling of the sexes underground! I almost felt sorry for my departed father, but was certain that even in these circumstances he would find ways to circumvent such Mafia-bound, Catholic-driven restrictions.

“Not everyone will be in a condition to toast Death’s imminence with champagne, as Anton Chekhov did,” wrote The Economist in a recent cover story on “How to have a better death.”2 Perhaps our social attitudes are advancing? Perhaps we are beginning to accept that birth-life-death is a unity to be celebrated? The traditional weak jocularities of funeral orations do appear to be gradually vanishing.

Writing for The Guardian, a part-time observer noted that “Just as weddings have gorged themselves into inflated self-promotion, so funerals are now doing the same. They are becoming extravagant forms of self-expression, designed to articulate our individuality.”3 Certainly the burial costs, not including the catering fees of a good wake, are soaring. Being buried in London’s Highgate Cemetery (along with Karl Marx and other celebrities) will cost more than £18,000 (over $20,000). This reminds me of the marvelous observation of Woody Allen: “My grandfather had a wonderful funeral… It was a catered funeral. It was held in a big hall with accordion players. On the buffet table there was a replica of the deceased in potato salad.”4

Funerals were not always as somber as those of the Middle Ages or even of the 19th century. In the time of Homer, for example, the Greek funeral was a three-act drama. The body was laid out in the first act, the transport to interment was the second act, and the third was the lowering of the body or the ashes into the grave. This scenario presented an opportunity for the display of family pride, wealth, solidarity and power.5 However, in those days there was a closer intimacy between the living and the dead. Homer described the dead as “ghosts of worn out mortals.” The dead had to be fitted with their obol, or boat fare, fixed between their teeth. This was a payment for being ferried across the river Styx by Charon, the boatman.6 It was also customary to place a laurel crown on the head of those deceased who had “bravely fought their contest with life.”

The classic Greek ceremony around the grave featured the singing of ritualized lamentations. Sometimes hired mourners dressed in long robes also participated. A chorus of women traditionally uttered a refrain of cries to accompany the sung lamentations. At the end of such burials the women left first to go to the house of the deceased to put the finishing touches on the banquet. However, it was Christianity that truly promoted the belief in life after death which had merely been hinted at by the Greeks.

Of all the global ceremonies surrounding death, none can surpass the creative ways Mexicans celebrate rather than mourn the departed. The Mexican “Day of the Dead” originated with the Aztecs, who, long before the landing of Columbus, had for centuries dedicated 30 days every August to death. The invading Spanish, when introducing Christianity, contracted these lengthy festivities into one day around All Saints’ and All Souls’ days in November. Today, El Dia de los Muertos continues to be a national celebration to honor those who have passed away.

Gravesites are decorated with flowers, angelitos (little papier-mâché angels), balloons and small altars adorned with candles, memorabilia, photos, as well as food in honor of the dead. The same happens at home, where those who have died can be reassured that they have not been forgotten and can enjoy a welcome homecoming. All of this is fun. The family may gather at the gravesites of their loved ones and enjoy a picnic in the presence of the departed. Some may play guitars, sing and even dance. The celebrations can continue with an all-night candlelight vigil where good times will be recalled and toasted with a drink or two.

The tragedy of the shortness of life is tempered not only by sorrow but also by pathos and extraordinary creativity. The pan de muertos (Day of the Dead bread) is a loaf sprinkled with cinnamon and decorated with “bones,” baked especially for the occasion. Sugar candies in the shape of skulls and bones are also common. For the family it may be a way of saying, “We cheated death because we are now eating you!” More serious papier-mâché skulls and skeletons, as well as clay, wood and plastic representations of the dead, come in different sizes and are even esteemed for their artistic craftsmanship. I have collected a small but charming group of such Mexican mementos of the dead.

These Mexican celebrations are untainted by the promotional intrusions of large corporations. Exploiting loss for commercial gain still seems most inappropriate to many. Inevitably, death in the capitalist world sells these days: Virgin Holidays suggests flying your way out of grief. Indeed, travel therapy may offer a faster escape from sorrow than some contemporary form of “sociotherapy.” I do recommend drawing on the profoundly celebratory aspects of the Mexicans. As The Economist concluded in its cover story: “A better death means a better life, right until the end.”

1 Catherine Porter, “Celebrating at his own wake,” The New York Times, May 29, 2017.
2 The Economist, April 29, 2017.
3 Giles Fraser, “The rise of so-called happy funerals…,” The Guardian, May 12, 2017.
4 Woody Allen, The Nightclub Years.
5 Robert Garland, The Greek Way of Death (1985), p. 23.
6 Yorick Blumenfeld, The Waters of Forgetfulness (2006).

Rodin’s 100th and the path of his successors

I have had the good fortune of being able to see the impressive exhibition at the Grand Palais in Paris commemorating the centenary of the passing of the great Auguste Rodin. It is but one of the many exhibitions of his marble and bronze sculptures to be shown around the world. This one, however, went one step further by also presenting a large number of sculptures created by those who purportedly followed in his footsteps.

The beauty of Rodin’s early works is most heartening, for the beauty of the human body was central to what Rodin achieved and tried to perfect. It must be remembered that one of his first acclaimed works in marble was that of an expertly rendered naked man, which was first praised by the art academicians in Paris and then swiftly rejected because they claimed he must have cheated by making a cast from the body of the model. Infuriated, Rodin swiftly made another marble of the model half a size larger and just as perfect. This time the experts had to accept the artist’s remarkable talent, and Rodin’s fame as the greatest sculptor since Michelangelo was in the making.

Beauty was at the center of Rodin’s work. “To tell the truth, every human type, every race has its beauty. The thing is to discover it,” he told his friend Paul Gsell.1 “Beauty is everywhere. It is not she that is lacking to our eye, but our eyes which fail to perceive her.” Character and expression, he claimed, were at the basis of beauty. “There is nothing ugly in art except that which is without character, that is to say, that which offers no outer or inner truth.”

Following the traditions of the greatest Greek sculptors, Rodin said that “The artists in those days had eyes to see, while those of today are blind; that is all the difference. The Greek women were beautiful, but their beauty lived above all in the minds of the sculptors who carved them.” These pronouncements by the master were in my mind when the show in the Grand Palais shifted to the marbles and bronzes of his followers, such as Cesar Baldaccini, Germaine Richier, and Barry Flanagan. What struck me about the large selection of these works was that they were no longer concerned with beauty. They were meant to impress by their horror, brute power, vacuity, the existential pain of being human, and even their humor.

In his later period Rodin became more experimental, trying to catch the dynamics of movement in his sculptures. His statue of the dancing Nijinsky climaxed this period. Rodin also started to focus on fragments of the human body before assembling such parts. The way he studied the power and effectiveness of the human hand (he collected thousands of plaster hands) was most revealing. No sculptor ever focused as acutely on hands and feet as Rodin. Altogether his later, more random approach to the presentation of the human form was to be a prelude to sculpture in the 20th century.

Rodin, however, held a deep respect for the materials he worked in, beginning with clay and progressing to plaster and then bronze or marble. Many of his successors mistreated their materials: ripping, stretching, distorting, or compacting the forms. The results ultimately proved provocative but were unrelated, in fact opposed, to the classical school. I found the comparison between Rodin’s famous “The Thinker” and Georg Baselitz’s interpretation of this masterpiece, a huge brutalized and primitively carved “Zero,” most painful. Perhaps it was intended to emphasize the collapse of our humanity following the horrors of World Wars I and II.

Claudia Schmuckli, the curator in charge of Contemporary Art and Programming for the Fine Arts Museums of San Francisco, who has put together a large collection of Rodins at the Legion of Honor Museum, said that “Rodin’s naturalist conception of the body and his embrace of the fragment as a motif in its own right deeply influenced the trajectory of modern sculpture.” Then she announced that she was thrilled that Sarah Lucas and Urs Fischer had “agreed to consider their work in this context and bring a contemporary perspective to our understanding of Rodin’s work and legacy.”

Now it must be conceded that Lucas and Rodin both had powerful sexual drives, but when it came to transferring these into a solid medium like bronze, marble or wood, Lucas descended into creating inflatable plastics, immortalized by the huge and hideous yellow plastic penis she produced for her show representing the UK at the Venice Biennale in 2015. Lucas also made a plaster cast of her bottom half, later inserting a cigarette poking out of her now inflatable plastic vagina. To be fair, she also cast the penis of her boyfriend, the composer Julian Simmons, over and over again to make a series called “Penetralia.”2 What I have seen of her attempts at sculpture are unsavory perversions of what art, such as Rodin created, can achieve.3 Lucas may have a sense of humor, but her lack of talent, in my mind, blocks any imaginable connection to Auguste Rodin.

Rodin was followed by such great sculptors as Arp, Archipenko, Boccioni, Duchamp-Villon, and Zadkine — all of whose works integrated their powerful artistic forms of expression with their own individual character. All of these sculptors were concerned with the beauty of their creations, much like Rodin, but today “beauty” is generally dismissed as a standard.

Today’s eager art lovers use the hashtag #Rodin100 just to keep track of the host of art museums, large and small, around the world (with the Rodin Museum in Paris at the center), all of which are or will be celebrating the works of the greatest of 19th-century sculptors. In turn, I find it hard to imagine what standards 21st-century sculptors will set.

1 Auguste Rodin, Rodin on Art and Artists (with conversations with Paul Gsell). (1983), p. 20
2 Charlotte Higgins, “Sarah Lucas: ‘I have several penises, actually’” The Guardian, May 6, 2015
3 Also to be shown at the Legion of Honor Museum will be “Concrete Boots,” “Nice Tits” and “Hoolian” by Sarah Lucas, one of the over-celebrated ‘Young British Artists.’


Should the market and the continuing advances in science and technology be the ultimate arbiters of where we are headed? Neither is subject to controls, and politicians are most reluctant to intervene in innovations in robotics or the internet. For me as a writer, the internet has proven to be both a great assistant and a serious enemy: it distracts me from concentrated attention, steals my time and space to think, degrades my memory, and tends to attack my eyes, my spinal column and even my social life. I know I am not alone in these observations. I have not joined Facebook, nor do I spend my nights tweeting like the US President, but the younger generation will simply say that I am out of touch. I counter this by pointing out that technology is undermining bookshops, printed newspapers and human touch.

So where are we headed? Do we really want to transform human nature so that in the 21st century consciousness will be uncoupled from intelligence? Yuval Noah Harari, the popular new writer/philosopher, suggests three more mundane developments in the 21st Century which are likely to overwhelm our human experience on this planet:

  1. Humans will lose their economic and military usefulness. This will lower their value in economic and political terms.
  2. The human collective will retain its values, but not unique individuals.
  3. A new elite of upgraded humans will arise.1

Harari suggests that “The most important question in 21st Century economics may well be what to do with all the superfluous people?” Contending that humans have both physical and cognitive abilities, he points out that taxi drivers are likely to go the way horses did during the Industrial Revolution. He asks, “What will happen once algorithms outperform us in remembering, analyzing and recognizing patterns?” I tend to agree with him that in the dystopian world which may be facing us, real jobs and full-time employment will be reserved for an educated, technology literate elite. The new wave of top corporations such as Amazon, Apple, Facebook, Google and Microsoft simply are not mass employers like Ford, General Electric, GM or Kodak used to be.

The progression of humans on this earth, from tilling the soil in 5000 BC to toiling in an Amazon warehouse, is not always obvious. Early in the 20th century, Frederick Taylor, in his celebrated book The Principles of Scientific Management, regarded workers as cogs in the industrial mass-production machine. A century later we are asking: why turn workers into machines when robots can do their jobs at a lower cost? Technology has produced ever more efficient ways of monitoring human capabilities and comparing these with the costs and greater profits from robots. Alas, money and profits in the capitalist system are becoming more important than human labor.

Some seventy millennia ago the improved capacity of the Homo sapiens mind started the revolution in which the DNA of one living species came to dominate the planet. Now a second revolution may be at hand, in which the scientific and technological advances of artificial intelligence will triumph over the genetic. Indeed, such progress will succeed because of the collaboration between people and algorithms, suggests Demis Hassabis, the co-founder and CEO of DeepMind. He stated that “If we want computers to discover new knowledge, then we must give them the ability to truly learn for themselves.”2 Please note the personification of the computers!

Harari adds that “high-tech gurus and Silicon Valley prophets are creating a new universal narrative that legitimizes the authority of algorithms and Big Data.” Just as free-market capitalists believe in the invisible hand of the market, so Dataists believe in the invisible hand of the data flow. As the global data-processing system becomes all-knowing and all-powerful, connecting to the system will become the source of all meaning. I hesitantly accept Harari’s proposal that “We are already becoming tiny chips inside a giant system that nobody really understands.”3

We are now at the stage of accepting that neurons, genes and hormones all obey the same physical and chemical laws of life on earth. However, it will take transcranial stimulators to enable us to decode the electrochemical brain processes which determine our perspective, because the two separate brain hemispheres are not always in touch with each other. It is the left hemisphere which is the seat of our verbal abilities, including our power to interpret the information that makes sense of our thoughts and experiences; it also controls the right hand. The right hemisphere is more creative and is crucial in the areas of music, imagination, and intention, as well as control of the left hand.

I suspect that ultimately spending untold billions on exploring the brain might be more productive than the trillions invested in space exploration. The motivation which underpins the competitive advance of this new technology is in large measure an economic one, as evidenced by the market for shares in high tech. Of course there is also the drive of scientists rushing to publish their pioneering breakthroughs and get them patented. The growth of technology in many ways resembles that of the market. The market is as blind as it is invisible. However, supply and demand cannot guide all of society; neither can technology. If everything were determined by the market, the courts, the police, and the army would vanish. So would the entire economy. Mark O’Connell, who has studied this proposition, recognized that growth was mediated by corporations whose real interest is to make eventual profits out of reducing human life to data.4

The effort to build a future in which human minds might be uploaded to computers is one aspect of Carbon Copies, a “nonprofit organization with a goal of advancing the reverse engineering of neural tissue and complete brains …creating what we call Substrate Independent Minds.” This non-profit group is funded by a number of adventurous millionaire investors who are seeking scientists who work “towards quantum leap discoveries that might rewrite the operating systems of life.”

Somehow I feel human cognition is demeaned when we reduce it to mechanical operations along computational lines. The internet is proving to be the single most powerful mind-affecting technology ever. As it is, the overwhelming flood of new data is extraordinarily disruptive. Many acquaintances suffer from neural addiction to Facebook, Twitter, the latest news and stock market results, on top of the steady flow of emails. Studies have shown that the cognitive losses from multi-tasking are higher than the cognitive losses from smoking pot. Aided by our smartphones and computers, we are able to multi-task. Apps on our smartphones serve as a calendar, a watch, a voice recorder, an alarm clock, a GPS, a camera, a flashlight and a news headliner. However, there is a cognitive cost every time we rapidly switch from one task to the next.5

Surveys show that almost a third of every working day is lost to keeping up with the information flow. The impact on the brain is barely understood, and nobody knows how it will affect us socially. What seems certain is that it will transform our existence as Homo sapiens has thus far experienced it. Attention deficit disorders are affecting more and more children, which is partly ascribed to the swift sequencing of images on the internet. The result is that three seconds is about as much time as will hold the attention of kids. How will this affect them in later years?

The universal change of pace has already had extraordinary effects in terms of consumption, obsolescence, renewal, inequality and many other conditions. I don’t believe the brain was built for the swift and continuing change that we are currently experiencing. The brain is adaptable and can accommodate small changes here and there, but not the continuous alterations which are changing the face of the earth: employment, wages, round-the-clock news, ringing mobiles, blogs, and communications. Cyberspace has invaded our public and private lives, our economy and our security as well. While everything is changing, politicians have neither appreciated nor understood the social revolution taking place. Few can accept the fundamental and rapid shifts in power. Currently there is no comprehension of who would control the new constructs as they arise, or how. AI is certainly going to transform the lives of architects, lawyers and medical professionals. Indeed, it threatens to overwhelm us all. Because we have no idea what the job market will be in 2030 or 2040, we have few notions of what to teach our kids today.

Such realities are far from what may come next. The founder of the 2045 Initiative, Dmitry Itskov, a Russian high-tech multimillionaire operating in Silicon Valley, wants “to create technologies enabling the transfer of an individual’s personality to a more advanced nonbiological carrier and extending life, including to the point of immortality.” One of the projects of the 2045 Initiative is to create artificial humanoid bodies that would be controlled through a brain-computer interface.

A conference in New York organized by Global Futures 2045 focused on “a new evolutionary strategy for humanity.” The organizer, Randall Koene, a “trans-humanist,” sees the mind as a piece of software, an application running on the platform of human flesh. The complex transformation starts with the scanning of the pertinent information stored in the neurons of a person’s brain. Although incredibly complicated because of the seemingly endless connections between the neurons, the scan becomes a blueprint for the reconstruction of neural networks, which are then transformed into a computational model. Ultimately this would allow scientists to create any material form which technology permits. The human could choose to become large or small, with feet or with wings, like a tiger or a tree. The prospects may challenge the human imagination, but such projections of AI advances fill me with forebodings of ultimate horror.

Ultimately, it is the arts that may become our human sanctuary when AI and robots have replaced teachers, doctors, lawyers and policemen. Creating new jobs will not be the challenge; it will be creating ones in which humans can outperform robots. The world we want will be one advancing direct experience, such as all the arts: music, dance, singing, painting, sculpting, writing, and acting. It would also endorse all the sports: running, swimming, hiking, climbing, walking, and exercising, as well as cooking, gardening, keeping pets, caring, loving, and travelling. The joys of all these activities will go far beyond the speculations of Alan Turing and his successors on the connections between randomness and creative intelligence. There is an urgent need for a re-evaluation of our relationship with the wonders of the new technology.*

Currently there is a widespread belief that the advances of technology, the internet and science are both unstoppable and, to a large extent, desirable. Silicon Valley’s most prominent figures hold the self-serving view that anything which slows scientific innovation is an attack on the public good.6

I liked Rutger Bregman’s outlook in Utopia for Realists. This young Dutchman suggests that we can construct a society around visionary ideas that could actually be implemented, like the plans for a universal basic income. As an aging Utopian, I have always endorsed building castles in the sky. Shocking ideas, usually rejected out of hand, often return to become popular and even accepted. The questions of ethics in a world that will be so different are daunting. Optimistically, crises, real or perceived, can spark genuine change. Sometimes this can be mind-blowing: as Harari cautioned, human nature is likely to be transformed in the 21st century because intelligence is uncoupling from consciousness. The countering encouragement he provides is that ultimately “It is our free will that imbues the universe with meaning.”7

1 Yuval Noah Harari, Homo Deus (2016), p. 356.
2 Demis Hassabis, “The Mind in the Machine,” The Financial Times Magazine, April 22, 2017.
3 Yuval Noah Harari, “In Big Data We Trust,” The FT Magazine, August 27, 2016, p. 14.
4 Mark O’Connell, “Goodbye Body, Hello Posthuman Machine,” The Observer, March 26, 2017.
5 Daniel J. Levitin, “Why the Modern World Is Bad for Your Brain,” The Observer, January 18, 2015.
6 “Computer Security,” The Economist, April 8, 2017, p. 75.
7 Homo Deus, op. cit.

* Regulating the internet would require a change in the political mindset in both Europe and the United States. The invasions of privacy and security, as well as the massive tax evasion by the largest internet companies, have not sufficed to bring about the essential changes. The two prime decisions made by the creators of the internet, principally Tim Berners-Lee, were that there would be no central control or ownership and that the network could not be dominated by any particular application.