THE CHALLENGE OF CHANGING PERSPECTIVES


For those of you who accidentally received a first draft of this blog last week, this one is quite different in its overall perspective. You may be amused by the radical changes I have made.


As a member of the older generation, I find the changes I continue to face in everyday life historically unprecedented, wide-ranging, and in many ways controversial. A number are difficult to handle or to tolerate for people of many different ages.

I find the continuing acceleration in the speed of change in life disturbing. Everybody is “busy” most of the time. We race from one place to another, spend too much time in traffic jams, rush through what we have to read, see on television and follow on our computers. Meals are cut short, and Victorian-style afternoon teas are no longer in fashion: they are too time-consuming.

Through the dynamism of both technology and finance, we have changed not only the pace of life but also its quality and direction. Money (that is, profit) has been the driving force of capitalism, but almost no attention has been given to the effects most innovations have on human beings. In my last blog I focused on the unknown impact of iPads and tablets on infants. That was not the occasion to examine the possible impact of computers, mobiles and automation on adults.

What first comes to mind is what I am doing right now! The hours spent every day on my computer are bad for my back, my eyes, my hands and my spirits. I still love writing with pen or pencil and find them wonderful but slow, and I, too, am often in a hurry. I am not on Facebook or the other social networks because they would intrude into my moments of leisure, time in the garden, or time to reflect.

So where can we take the currently uncontrolled and unplanned advances of technology, which are popularly assumed to culminate in Artificial Intelligence? How can we test the effects of automation on human beings as well as on entire societies? It is evident that as long as money and profits remain the prime driving force, there is little possibility of controlling the effects on our brains and mental states of untested but desirable technology-driven innovations.

Let me suggest that the pharmaceutical industry is a good example of what the Silicon Valley giants could try to copy: in most countries almost all new medicines have to pass a variety of rigorous tests of their suitability for patients. If this difficult as well as bureaucratic program works effectively in protecting our physical health, why could comparable tests not be applied to protect the mental well-being of those subjected to electronic stimulation, ranging from head-sets to our everyday iPhones? We have little idea at the moment of what we are subjecting our brains (and hearts) to, and of what damage many electronic devices may possibly be doing.

On a broader perspective, some of the impacts of the new technology on the younger generation are evident: many no longer communicate in writing on paper and tend toward minimalism when it comes to expressing themselves. They don’t even like to use the telephone, regarding it as a medium for old-timers. I have been advised by a son that he no longer reads any email which extends beyond two terse paragraphs. As a writer, I find all of this poses cultural challenges which we could perhaps correct in schools and universities over time.

As a writer and former journalist, I am most disturbed by the newly popularized crisis of faith in journalism. The masses like to get an instant flow of events from Twitter and the online news organizations. What with perverters of the truth like Murdoch’s Sun newspaper in the UK and Fox News in the USA, the press increasingly gives readers the scandals they want rather than informing them of events which might increase their knowledge or prove useful. For that matter, I have to confess that getting the Trump scenarios out of my mind is becoming an everyday challenge.

Even much of our economics is becoming unfathomable: Bitcoin and the other digital crypto-currencies make no sense. They seem to be new instruments for gamblers, tax evaders, and high-tech risk takers rather than money to be used every day. Government programs of QE (Quantitative Easing), in which billions upon billions of dollars, pounds and other currencies have been pumped into bank reserves, also seem most dubious. The whole QE process comes straight out of wonderland and tends to confuse minds, even in government, about reality.

I must balance these deep concerns with an expression of the positive advances in so many areas. I am most enthusiastic about the giant greenhouses modeled on the Eden Project in Cornwall. The co-founder, Sir Tim Smit, wants “to create oases of change… our job is to create a fever of excitement about the world that is ours to make better.” His group is now planning the construction of giant greenhouse domes in China, Australia and New Zealand.

I find GPS, which directs us around the world from outer space, a marvelous technological breakthrough, much as it may do away with our former ability to read maps. This is one variation of the impact that technologies have on our abilities. When schoolchildren some fifty years ago were given simple hand-held adding machines, they quickly forgot how to do their sums.

The miracle cures for cancer exploiting the powers of genetics and our human immune systems are to be lauded. The related advances in gene-editing techniques promise extraordinary solutions to many of our genetically based illnesses. However, as with medicines, we should try to advance more carefully, with intense examination of the possible consequences, rather than triumphantly announcing breakthroughs. The moral challenges we face with the introduction of gene editing must be handled with enormous care and consideration. Our perspective on how to protect our minds after all these millennia of change and development must not be corrupted by the lure of money nor even by the competitive egos of leading scientists.1

Governments around the world are now planning to ban all diesel and petrol vehicles over the next 25+ years because the rising levels of nitrogen oxides present a major threat to public health as well as to the climate. If governments can do this on a cooperative basis, why can they not start research on whether the electronic products of Silicon Valley are contributing to mental and social imbalances in the population?

Thankfully, there are numerous aspects of our evolving cultures, like the above, which are greatly encouraging. I think it is most important to focus on these to bring greater hope to millions of people who have become deeply discouraged by the universal focus on capitalist competition, celebrity, and terrorism in this new millennium. I am advocating that the wonders of being alive on this incredible planet truly should be the basis for much of future optimism in the next generations.


1Yorick Blumenfeld, Towards the Millennium, (1996) pp. 421-428


iPADS for INFANTS

I was rattled in a restaurant recently watching a couple encouraging their year-and-a-half-old toddler to slide a finger across his iPad. The little one was excited to see the changes on the screen. A few weeks later I observed a two-year-old grabbing his four-year-old brother’s iPad and operating it vengefully! For a few of the “advanced” members of this age group, the first word is not “Mom” or “Dad” but “Pad.” Some toddlers have even become addicted to these electronic wonders! What is happening is that children are being subjected to unknown and untested challenges to their personal development.

There are numerous videos on YouTube of these little ones sliding their fingers across the pages of magazines lying on the kitchen table in an effort to activate them. Parents may wonder how the tablets affect their young ones, but most are pleased that quiet reigns in the house and they rationalize that even at this early stage of life their offspring are learning how to focus and develop their attention spans. However, some mothers and fathers are so fearful of the possible consequences that they have chosen to deny their kids access to these technological marvels.

“There is something important going on here and we need to learn what effects this is having on learning and attention, memory and social development,” says Jordy Kaufman, the director of BabyLab, one of the rare groups researching this area, which operates under the auspices of Australia’s Swinburne University.1 His team is trying to learn how iPads and tablets affect the long-term mental development of the very young, using innovative approaches to explore both the cognitive and the social aspects of brain development.

The techniques at BabyLab include behavioral eye tracking, which measures observable changes in development, for example whether babies have a preference for faces over objects, as well as electro-physiological methods, which track the changes in brain activity that occur when resting or responding to iPads. Toddlers do detect subtle changes when they see something happening on the screen, like a change of color, an object in motion, or a face, and they may empathize with what they observe. Their instant reaction is: “Is that me? Is that another?” For an instant they relate, because that’s the way their brains are wired. Some of the very young may believe that the iPad is alive, but most intuitively accept that it is not.

While in-depth studies have been made of the effects of television on the younger generation, very little research has been done on the effect of tablets on those of kindergarten age. Indeed there may be benefits for the very young in developing motor skills as they learn to push buttons and softly slide their fingers. Their exposure to tablets may give them a kick-start to learning. However, Kaufman cautioned that “There is a school of thought that tablet use is rewiring children’s brains, so to speak, to make it difficult for them to attend to slower-paced information.”

Denying children access to iPads entails risks, contends Rose Flewitt, who is doing research at the Institute of Education at the University of London. She studies how iPads can help literacy at the nursery and primary levels. “Having one section of society that is growing up with skills and one section that is growing up without it” is problematic, she posits. On the other hand, tablets and iPads do nothing to foster social skills in the very young.

The immediate response to pushing a button is highly satisfying and pleasurable for children, who delight in the lights, images and sounds that emerge. No admonishments come from the iPads, but neither does any positive feedback. The electronic instruments are fast, dependable and soon become familiar. However, the cold glass, plastic and metal of the tablets’ casings provide only limited sensory experiences for the very young. There is none of the comfort provided by the traditional cuddlys and stuffies. The experts wonder whether a profound shift in childhood mindset may be taking place here without our understanding it. It is appalling how little is known about the effects of the rapid and continuing educational technology advances these children now experience.

What is certain is that many of the new generation get hooked on the irresistibility of these swift educational technology advances. By the time they are teenagers they are likely to spend close to eight hours a day using electronics such as computers, TV sets, smartphones and iPads, as most American 13-year-olds do today. However, I shall not wander into the more advanced levels of the $100 billion educational technology industry (a figure encompassing the combined European and North American markets), which experiences continual development driven less by the needs of students and teachers than by the profit motive.

Over the past three decades we have seen computers used to improve efficiency in classrooms and keep pupils engaged, but they have not transformed learning in the way the promoters had predicted. It is basically unknown whether educational technology is advancing the potential of the new generation. The Economist contends that there has been a succession of inventions promising to overhaul education, but none has done so yet. Studies have found little difference in the maths, science and reading abilities of 15-year-olds between schools that spent heavily on IT and those that did not.2

It seems evident to me that what is happening electronically at the early stages in the lives of children is now one of the basic aspects challenging their overall mental development. We simply don’t know how abandoning the reading of books, listening to stories, and other aspects of traditional education will affect future generations. Artificially personalized machine contacts are unlikely to match the look of the human eyes, the sound of a genuine voice, the scent of the adult, the warmth and familiarity of touch — all of which exert a personal impact whose combined effects on the psyche cannot be over-estimated. I believe we are putting our culture and entire civilization at great risk if we allow “the technological” to overwhelm “the human” during the introduction of the new generations into this world.


1Paula Cocozza, “Children of the Revolution,” The Guardian, January 9, 2014

2“Machine Learning,” The Economist, July 22, 2017, p. 18

NEW DIRECTIONS FOR RIGHT AND WRONG?

Even before this era of “fake news” and the easy willingness to mix lies and truth, I was already deeply concerned about the swift decline in our belief in ethical rights and wrongs. I accept that we may find it increasingly difficult, given the distractions of social media, to live by our traditional ethical guidelines. However, I feel strongly about the universal need to accept the principles of right and wrong which resonate within us.

Historically, morals, affiliations, and religions have all been dependent on strongly held convictions in right and wrong. Philosophers, beginning with Socrates (469–399 BC), have long debated the foundations of moral decision-making. Socrates was one of the earliest of the Greek philosophers to focus on self-knowledge in such matters as right and wrong. He advanced the notion that human beings would naturally do good if they could rationally distinguish right from wrong. It followed that bad or evil actions were the consequence of ignorance; the assumption was that those who know what is right automatically do it. Socrates held that the truly wise would know what is right, do what is good, and enjoy the result. However Aristotle, the pupil of Socrates’ most famous student, Plato, held that precise knowledge of right and wrong was far harder to achieve in ethics than in any other sphere of inquiry. Aristotle thought ethical knowledge depended on custom, tradition and social habits in ways that made it distinctive.

Only much later did John Locke strike out in a new direction with his determination to establish a “science of ethics.” He went astray in his search but, as we shall see, the thread was to be picked up again by neuroscientists hundreds of years later. David Hume then went on to argue that empathy should guide moral decisions and our ultimate ideals.

John Stuart Mill, in the mid-19th century, advanced liberalism in part by advocating that following what is right would lead to an improvement of our lives. “Actions are right in proportion as they tend to promote happiness; wrong as they tend to produce the reverse of happiness,” Mill wrote.1 Admittedly, many actions in this colonial era increased the well-being of some while inflicting suffering on others. “Wrong” often boiled down to selfishness, while “right” encompassed a willingness to take personal responsibility for considering the consequences that one’s actions might have for others.

Today “right” and “wrong” are generally assumed to be learned from schooling, parental teaching, and legal and religious instruction. However, primatologists like Marc D. Hauser, a Harvard biologist, contend that the roots of human morality are also evident in such social animals as apes and monkeys, which display the feelings of reciprocity and empathy essential for group living. Hauser has built on this to propose that evolution wired survival, among other social factors, into our neural circuits.2 The swift decisions that had to be made in life-or-death situations were not accessible to the conscious mind. Hauser’s ultimate objective is to get morality accepted as being objectively true. This counters what most people in the northern hemisphere believe: that ethics are relative to times, cultures and individuals. Thus questions like gender, abortion, capital punishment and euthanasia waver in the winds of right and wrong.

The prolific Anglo-Irish writer Brian Cleeve (1921-2003) asked: “Has the time arrived again when people must make moral standards a personal crusade? Has the time come to stand up and be counted for the difference between right and wrong?”3 Cleeve contended that “In our modern eagerness to be tolerant, we have come to tolerate things which no society can tolerate and remain healthy. In our understandable anxiety not to set ourselves up as judges, we have come to believe that all judgments are wrong. In our revulsion against hypocrisy and false morality, we have abandoned morality itself. And with modest hesitations but firm convictions I submit that this has not made us happier, but much unhappier.” In his book 1938: A World Vanishing, he held that at that time the average man and woman in Britain “possessed a keen notion of what was right and what was wrong, in his and her own personal life, in the community, and in the world at large.”

The entry of neuroscientists, experimental psychologists, and social scientists into the search for a possibly physical basis for such philosophical challenges as right and wrong has led to experiments with brain-scanning technology. The work of Harvard professor Joshua Greene has led him to conclude that “emotion and reason both play critical roles in moral judgment and that their respective influences have been widely misunderstood.”4 Greene’s “dual-process theory” posits that emotion and rationality enter into moral decision-making according to the circumstances. He uses functional magnetic resonance imaging (fMRI) to examine specific areas of the brain as it functions: the flow of blood to the amygdala (the seat of emotions) is compared with the flow to the prefrontal cortex (which houses deliberative reasoning). The results, Greene believes, illustrate that even when humans are calculating abstract probabilities, they may also rely on emotions for guidance. “Reason by itself doesn’t have any ends, or goals,” Greene concludes. “It can tell you what will happen if you do this or that, and whether or not A and B are consistent with each other. But it can’t make the decision for you.” Greene believes that by learning more about the neurological mechanisms involved in moral decision-making, people could eventually improve the way they make their judgments. Rationality cannot function independently of emotions, even in utilitarian or rational decision makers.

Globally we have come to separate ethics and politics. No group can impose its moral conceptions on society at large. Social media are powerful in creating herds of subscribers to groups whose facades of universal values mask narrow interests and replace ethics. Members need to be “right” in order to feel popular. Those who believe they are right are sharply divided from those they perceive to be wrong. Most people want to be right as an indication of their intelligence, their power, their vision and, ultimately, of their desire for admiration and acknowledgment of their status. Like exhibitionist peacocks, some almost seem desperate to display their “superiority.” Our psychological make-up traditionally strengthens such positions. William Hazlitt wrote some 200 years ago that “We are not satisfied to be right unless we can prove others to be quite wrong.”5

Some three generations ago Adolf Hitler insisted that “Success is the sole earthly judge of right and wrong.” I suspect there are contemporary leaders who might agree with such an extraordinary assumption. I feel that the requirements of a moral life are unlikely to be promoted by the current political leaderships. The sociologist Max Weber held that the ethic of responsibility in politics could only be resolved if we demand the minimum of internal and external danger for all concerned.6 I regret to say that this demand seems unlikely to be followed, but personally I believe that individual responsibility, which must entail a good measure of rationality, is absolutely essential if there is to be a reversal of the fast-fading social significance of human Rights and Wrongs.


1John Stuart Mill, Utilitarianism II, (1863)
2Marc D. Hauser, “Moral Minds” (2006)
3Brian Cleeve, “1938: A World Vanishing,” (1982)
4Peter Saalfield, “The Biology of Right and Wrong” Harvard Magazine, January (2012)
5William Hazlitt, “Conversations of James Northcote,” 1830.
6Max Weber, “Politics as a Vocation,” Essays in Sociology, (1946) p.119

THE PROMOTIONS OF DEATH

I was just starting to think about the commercialization of death about a year ago when I was bowled over by a promotional leaflet sent to me by the UK’s Cooperative Society suggesting what fun death could be! (See above.) My instinctive reaction was: how inappropriate can you get? In my experience, death has brought an end to the fun we could expect from life. Indeed, the lyrics of the Grateful Dead have not been ringing in my ears.

Now, a year later, I find a long article starting on the front page of The New York Times, “Celebrating at his own wake.” The correspondent describes in detail how a fatally ill former priest, John Shields, carefully planned his last hours before having a lethal injection administered by his doctor. What he wanted was a wonderfully boozy Irish wake celebrated by some two dozen friends while he was still alive.1

At this ultimate party, Shields’ friends proclaimed their love, gratitude and admiration for their host. Without the increasingly invasive promotional efforts of the funeral trade, the small group expressed their thanks for his friendship and his courage. When one of them planted a kiss on his lips, Shields aroused much laughter when he quipped, “I was just thinking, ‘I’d like more of that,’ then thought, ‘That’s not a good idea.’”

Towards the end of his own wake, Shields had wanted to join in singing the verses of a special departure song with the classic Celtic folk lyrics of “The Parting Glass”:

But since it falls, unto my lot,
That I should rise and you should not,
I’ll gently rise and I’ll softly call,
Good night and joy be with you all.

However, a tired and sick Shields was drifting off to sleep, and later managed to wave to his friends as he was wheeled out of the party smiling and telling everyone, “I’ll see you later.”

Perhaps our social attitudes are changing. Maybe it is time to put the fun back into our funerals? Could we turn the wake into the party of a lifetime? Promoters suggest a special day themed on the ancient Scots, on New Orleans jazz, or even on a Dadaist celebration. The happy ending of such a wake would be one way to evade our fears of the unknown. Evasion, as well as denial, has long been the classic psychological way of cheering up gatherings overcome with grief. As George Bernard Shaw declared: “Life does not cease to be funny when people die any more than it ceases to be serious when people laugh.”

At my own father’s rushed funeral in an over-populated Italian cemetery on the outskirts of Rome (where my father’s assistant and I were the sole mourners on a hot July afternoon), I was informed by a brusque Mafia undertaker that my father was being buried in an all-male section of the cemetery. This was because the law forbade any intermingling of the sexes underground! I almost felt sorry for my departed father, but was certain that even in these circumstances he would find ways to circumvent such Mafia-bound, Catholic-driven restrictions.

“Not everyone will be in a condition to toast Death’s imminence with champagne, as Anton Chekhov did,” wrote The Economist in a recent cover story on “How to have a better death.”2 Perhaps our social attitudes are advancing? Perhaps we are beginning to accept that birth-life-death is a unity to be celebrated? The traditional weak jocularities of funeral orations do appear to be gradually vanishing.

Writing for The Guardian, one observer noted that “Just as weddings have gorged themselves into inflated self-promotion, so funerals are now doing the same. They are becoming extravagant forms of self-expression, designed to articulate our individuality.”3 Certainly burial costs, not including the catering fees of a good wake, are soaring. Being buried in London’s Highgate Cemetery (along with Karl Marx and other celebrities) will cost more than £18,000 (over $20,000). This reminds me of the marvelous observation of Woody Allen: “My grandfather had a wonderful funeral… It was a catered funeral. It was held in a big hall with accordion players. On the buffet table there was a replica of the deceased in potato salad.”4

Funerals were not always as somber as those of the Middle Ages or even of the 19th century. In the time of Homer, for example, the Greek funeral was a three-act drama: the body was laid out in the first act, transported to interment in the second, and lowered, as body or ashes, into the grave in the third. This scenario presented an opportunity for the display of family pride, wealth, solidarity and power.5 However, in those days there was a closer intimacy between the living and the dead. Homer described the dead as “ghosts of worn out mortals.” The dead had to be fitted with their obol, or boat fare, fixed between their teeth. This was payment for being ferried across the river Styx by Charon, the boatman.6 It was also customary to place a laurel crown on the head of those deceased who had “bravely fought their contest with life.”

The classic Greek ceremony around the grave featured the singing of ritualized lamentations. Sometimes hired mourners dressed in long robes also participated. A chorus of women traditionally uttered a refrain of cries to accompany the sung lamentations. At the end of such burials the women left first to go to the house of the deceased to put the finishing touches on the banquet. However, it was Christianity that truly promoted the belief in life after death which had merely been hinted at by the Greeks.

Of all the global ceremonies surrounding death, none can surpass the creative ways Mexicans celebrate rather than mourn the departed. The Mexican “Day of the Dead” originated with the Aztecs, who, for centuries before the landing of Columbus, had dedicated 30 days every August to death. The invading Spanish, when introducing Christianity, contracted these lengthy festivities into one day around the All Saints’ and All Souls’ days in November. Today, El Dia de los Muertos continues to be a national celebration to honor those who have passed away.

Gravesites are decorated with flowers, angelitos (little papier-mâché angels), balloons, and small altars adorned with candles, memorabilia, photos, and food in honor of the dead. The same happens at home, where those who have died can be reassured that they have not been forgotten and can enjoy a welcome homecoming. All of this is fun. The family may gather at the gravesites of their loved ones and enjoy a picnic in the presence of the departed. Some may play guitars, sing and even dance. The celebrations can continue with all-night candlelight vigils where good times are recalled and toasted with a drink or two.

The tragedy of the shortness of life is tempered not only by sorrow but also by pathos and extraordinary creativity. The pan de muertos (Day of the Dead bread) is a loaf sprinkled with cinnamon and decorated with “bones” especially baked for the occasion. Sugar candies in the shape of skulls and bones are also common. For the family it may be a way of saying “We cheated death because we are now eating you!” More serious papier-mâché skulls and skeletons, as well as clay, wood and plastic representations of the dead, come in different sizes and are even esteemed for their artistic craftsmanship. I have collected a small but charming group of such Mexican mementos of the dead.

These Mexican celebrations are untainted by the promotional intrusions of large corporations. Exploiting loss for commercial gain still seems most inappropriate to many. Inevitably, however, death sells in the capitalist world these days: Virgin Holidays suggests flying your way out of grief. Indeed, travel therapy may offer a faster escape from sorrow than some contemporary form of “sociotherapy.” I do recommend drawing on the profoundly celebratory approach of the Mexicans. As The Economist concluded in its cover story: “A better death means a better life, right until the end.”


1Catherine Porter, “Celebrating at his own wake,” The New York Times, May 29, 2017.
2The Economist, April 29, 2017.
3Giles Fraser, “The rise of so-called happy funerals…” The Guardian, May 12, 2017
4The Nightclub Years
5Robert Garland, The Greek Way of Death, (1985) p.23
6Yorick Blumenfeld, The Waters of Forgetfulness, (2006)

Rodin’s 100th and the path of his successors

I have had the good fortune of being able to see the impressive exhibition at the Grand Palais in Paris commemorating the centenary of the passing of the great Auguste Rodin. It is but one of the many exhibitions of his marble and bronze sculptures being shown around the world. This one, however, went one step further by also presenting a large number of sculptures created by those who purportedly followed in his footsteps.

The beauty of Rodin’s early works is most heartening, for the beauty of the human body was central to what Rodin achieved and tried to perfect. It must be remembered that one of his first acclaimed works was a marble of an expertly rendered naked man, which was first praised by the art academicians in Paris and then swiftly rejected because they claimed he must have cheated by making a cast from the body of the model. Infuriated, Rodin swiftly made another figure of the model half a size larger and just as perfect. This time the experts had to accept the artist’s remarkable talent, and Rodin’s fame as the greatest sculptor since Michelangelo was in the making.

Beauty was at the center of Rodin’s work. “To tell the truth, every human type, every race has its beauty. The thing is to discover it,” he told his friend Paul Gsell.1 “Beauty is everywhere. It is not she that is lacking to our eye, but our eyes which fail to perceive her.” Character and expression, he claimed, were at the basis of beauty. “There is nothing ugly in art except that which is without character, that is to say, that which offers no outer or inner truth.”

Following the traditions of the greatest Greek sculptors, Rodin said that “The artists in those days had eyes to see, while those of today are blind; that is all the difference. The Greek women were beautiful, but their beauty lived above all in the minds of the sculptors who carved them.” These pronouncements by the master were in my mind when the show in the Grand Palais shifted to the marbles and bronzes of his followers, such as César Baldaccini, Germaine Richier, and Barry Flanagan. What struck me about the large selection of these works was that they were no longer concerned with beauty. They were meant to impress by their horror, brute power, vacuity, the existential pain of being human, and even humor.

In his later period Rodin became more experimental, trying to capture the dynamics of movement in his sculptures. His statue of a dancing Nijinsky climaxed this period. Rodin also started to focus on fragments of the human body before assembling such parts. The way he studied the power and effectiveness of the human hand, collecting thousands of plaster hands in the process, was most revealing. No sculptor ever focused as acutely on hands and feet as Rodin. Altogether, his later, more random approach to the presentation of the human form was to be a prelude to sculpture in the 20th century.

Rodin, however, held a deep respect for the materials he worked in, beginning with clay and progressing to plaster and then bronze or marble. Many of his successors mistreated their materials, ripping, stretching, distorting, or compacting the forms. The results ultimately proved provocative but were unrelated, in fact opposed, to the classical school. I found the comparison between Rodin’s famous “The Thinker” and Georg Baselitz’s interpretation of this masterpiece, a huge, brutalized and primitively carved “Zero,” most painful. Perhaps it was intended to emphasize the collapse of our humanity following the horrors of World Wars I and II.

Claudia Schmuckli, the curator in charge of Contemporary Art and Programming for the Fine Arts Museums in San Francisco, who has put together a large collection of Rodins at the Legion of Honor Museum, said that “Rodin’s naturalist conception of the body and his embrace of the fragment as a motif in its own right deeply influenced the trajectory of modern sculpture.” She then announced that she was thrilled that Sarah Lucas and Urs Fischer had “agreed to consider their work in this context and bring a contemporary perspective to our understanding of Rodin’s work and legacy.”

Now it must be conceded that Lucas and Rodin both had powerful sexual drives, but when it came to transferring these into a solid such as bronze, marble or wood, Lucas descended into creating inflatable plastics, immortalized by the huge and hideous yellow plastic penis she produced for her show representing the UK at the Venice Biennale in 2015. Lucas also plaster-cast her bottom half and later inserted a cigarette poking out of her now inflatable plastic vagina. To be fair, she also cast the penis of her boyfriend, the composer Julian Simmons, over and over again to make a series called “Penetralia.”2 What I have seen of her attempts at sculpture are unsavory perversions of what art, such as Rodin created, can achieve.3 Lucas may have a sense of humor, but her lack of talent, to my mind, blocks any imaginable connection to Auguste Rodin.

Rodin was followed by such great sculptors as Arp, Archipenko, Boccioni, Duchamp-Villon, and Zadkine — all of whose works integrated their powerful artistic forms of expression with their own individual character. All of these sculptors were concerned with the beauty of their creations, much like Rodin, but today “beauty” is generally dismissed as a standard.

Today’s eager art lovers use the hashtag #Rodin100 to keep track of the host of art museums large and small around the world, with the Rodin Museum in Paris at the center, all of which are or will be celebrating the works of the greatest of 19th century sculptors. In turn, I find it hard to imagine what standards the sculptors of the 21st century will set.


1 Auguste Rodin, Rodin on Art and Artists (with conversations with Paul Gsell) (1983), p. 20
2 Charlotte Higgins, “Sarah Lucas: ‘I have several penises, actually’,” The Guardian, May 6, 2015
3 Also to be shown at the Legion of Honor Museum will be “Concrete Boots,” “Nice Tits” and “Hoolian” by Sarah Lucas, one of the over-celebrated ‘Young British Artists.’

WHAT KIND OF A WORLD DO WE WANT TO LIVE IN?

Should the market and the continuing advances in science and technology be the ultimate arbiters of where we are headed? Neither is subject to controls, and politicians are most reluctant to intervene in innovations in robotics or the internet. For me as a writer, the internet has proven to be both a great assistant and a serious enemy: it distracts me from concentrated attention, steals my time and space to think, degrades my memory, and tends to attack my eyes, my spinal column and even my social life. I know I am not alone in these observations. I have not joined Facebook, nor do I spend my nights tweeting like the US President, but the younger generation will simply say that I am out of touch. I counter this by pointing out that technology is undermining bookshops, printed newspapers and human touch.

So where are we headed? Do we really want to transform human nature so that in the 21st century consciousness will be uncoupled from intelligence? Yuval Noah Harari, the popular new writer/philosopher, suggests three more mundane developments in the 21st Century which are likely to overwhelm our human experience on this planet:

  1. Humans will lose their economic and military usefulness. This will lower their value in economic and political terms.
  2. Value will attach to the human collective, but not to unique individuals.
  3. A new elite of upgraded humans will arise.1

Harari suggests that “The most important question in 21st Century economics may well be what to do with all the superfluous people?” Contending that humans have both physical and cognitive abilities, he points out that taxi drivers are likely to go the way horses did during the Industrial Revolution. He asks, “What will happen once algorithms outperform us in remembering, analyzing and recognizing patterns?” I tend to agree with him that in the dystopian world which may be facing us, real jobs and full-time employment will be reserved for an educated, technology literate elite. The new wave of top corporations such as Amazon, Apple, Facebook, Google and Microsoft simply are not mass employers like Ford, General Electric, GM or Kodak used to be.

The progression of humans on this earth, from tilling the soil in 5000 BC to toiling in an Amazon warehouse, is not an obvious story of progress. Early in the 20th century, Frederick Taylor, in his celebrated book The Principles of Scientific Management, regarded workers as cogs in the industrial mass-production machine. A century later we are asking why turn workers into machines when robots can do their jobs at a lower cost. Technology has produced ever more efficient ways of monitoring human capabilities and comparing these with the costs and greater profits of robots. Alas, money and profits in the capitalist system are becoming more important than human labor.

Some seventy millennia ago the improved capacity of the Homo sapiens mind started the revolution through which the DNA of one living species came to dominate the planet. Now a second revolution may be at hand, in which the scientific and technological advances of artificial intelligence will triumph over the genetic. Indeed, such progress will succeed because of the collaboration between people and algorithms, suggests Demis Hassabis, the co-founder and CEO of DeepMind. He stated that “If we want computers to discover new knowledge, then we must give them the ability to truly learn for themselves.”2 Please note the personification of the computers!

Harari adds that “high-tech gurus and Silicon Valley prophets are creating a new universal narrative that legitimizes the authority of algorithms and Big Data.” Just as free-market capitalists believe in the invisible hand of the market, so Dataists believe in the invisible hand of the data flow. As the global data-processing system becomes all-knowing and all-powerful, connecting to the system will become the source of all meaning. I hesitatingly accept Harari’s proposal that “We are already becoming tiny chips inside a giant system that nobody really understands.”3

We are now at the stage of accepting that neurons, genes and hormones all obey the same physical and chemical laws of life on earth. However, it will take transcranial stimulators to enable us to decode the electrochemical brain processes which determine our perspective, because the two separate brain hemispheres are not always in touch with each other. The left hemisphere is the seat of our verbal abilities, including our power to interpret the information that makes sense of our thoughts and experiences; it also controls the right hand. The right hemisphere is more creative and is crucial in the areas of music, imagination, and intention, as well as control of the left hand.

I suspect that ultimately spending untold billions on exploring the brain might be more productive than trillions invested in space exploration. The motivation which underpins the competitive advance of this new technology is in large measure an economic one, as evidenced by the market for shares in high tech. Of course there is also the drive of scientists rushing to publish their pioneering breakthroughs and to get them patented. The growth of technology in many ways resembles that of the market. The market is as blind as it is invisible. However, supply and demand cannot guide all of society. Neither can technology. If everything were determined by the market, the courts, the police, and the army would vanish. So would the entire economy. Mark O’Connell, who has studied this proposition, recognized that growth was mediated by corporations whose real interest is to make eventual profits out of reducing human life to data.4

The effort to bring about a future in which human minds might be uploaded to computers is one aspect of Carbon Copies, a “nonprofit organization with a goal of advancing the reverse engineering of neural tissue and complete brains …creating what we call Substrate Independent Minds.” This non-profit group is funded by a number of adventurous millionaire investors who are seeking scientists who work “towards quantum leap discoveries that might rewrite the operating systems of life.”

Somehow I feel human cognition is demeaned when we reduce it to mechanical operations along computational lines. The internet is proving to be the single most powerful mind-affecting technology ever. As it is, the overwhelming flood of new data is extraordinarily disruptive. Many acquaintances suffer from neural addiction to Facebook, Twitter, the latest news and stock market results, on top of the steady flow of emails. Studies have shown that the cognitive losses from multi-tasking are higher than those from smoking pot. Aided by our smart phones and computers, we are able to multi-task: apps on our phones serve as a calendar, a watch, a voice recorder, an alarm clock, a GPS, a camera, a flashlight and a news headliner. However, there is a cognitive cost every time we rapidly switch from one task to the next.5

Surveys show that almost a third of every working day is lost to keeping up with the information flow. The impact on the brain is barely understood, and nobody knows how it will affect us socially. What seems certain is that it will transform our existence as Homo sapiens has thus far experienced it. Attention deficit disorders are affecting more and more children, a trend partly ascribed to the swift sequencing of images on the internet. The result is that three seconds is about as long as anything will hold the attention of kids. How will this affect them in later years?

The universal change of pace already has had extraordinary effects in terms of consumption, obsolescence, renewal, inequality and many other conditions. I don’t believe the brain was built for the swift and continuing change that we are currently experiencing. The brain is adaptable and can accommodate small changes here and there, but not the continuous alterations which are changing the face of the earth, employment, wages, round-the-clock news, ringing mobiles, blogs, and communications. Cyberspace has invaded our public and private lives, our economy and our security as well. While everything is changing, politicians have neither appreciated nor understood the social revolution taking place. Few can accept the fundamental and rapid shifts in power. Currently there is no comprehension of who would control the new constructs as they arise, or how. AI is certainly going to transform the lives of architects, lawyers and medical professionals. Indeed, it threatens to overwhelm us all. Because we have no idea what the job market will be in 2030 or 2040, we have few notions of what to teach our kids today.

Such realities are far from what may come next: the founder of the 2045 Initiative, Dmitry Itskov, a Russian high-tech multimillionaire operating in Silicon Valley, wants “to create technologies enabling the transfer of an individual’s personality to a more advanced nonbiological carrier and extending life, including to the point of immortality.” One of the projects of the 2045 Initiative is to create artificial humanoid bodies that would be controlled through a brain-computer interface.

A conference in New York organized by Global Futures 2045 focused on “a new evolutionary strategy for humanity.” The organizer, Randall Koene, a “trans-humanist,” sees the mind as a piece of software, an application running on the platform of human flesh. The complex transformation starts with the scanning of the pertinent information stored in the neurons of a person’s brain. Although incredibly complicated because of the seemingly endless connections between the neurons, the scan becomes a blueprint for the reconstruction of neural networks, which are then transformed into a computational model. Ultimately this would allow scientists to create any material form which technology permits. The human could choose to become large or small, with feet or with wings, like a tiger or a tree. The prospects may challenge the human imagination, but such projections of AI advances fill me with forebodings of ultimate horror.

Ultimately, it is the arts that may become our human sanctuary when AI and robots have replaced teachers, doctors, lawyers and policemen. The challenge will not be creating new jobs; it will be creating ones in which humans can outperform robots. The world we want will be one advancing direct experience, such as all the arts: music, dance, singing, painting, sculpting, writing, and acting. It would also endorse all the sports, such as running, swimming, hiking, climbing, walking, and exercising, as well as cooking, gardening, keeping pets, caring, loving, and travelling. The joys of all these activities will go far beyond the speculations of Alan Turing and his successors on the connections between randomness and creative intelligence. There is an urgent need for a re-evaluation of our relationship with the wonders of the new technology.*

Currently there is a widespread belief that the advances of technology, the internet and science are both unstoppable and to a large extent, desirable. Silicon Valley’s most prominent figures hold self-serving views that anything which slows scientific innovation is an attack on the public good.6

I liked Rutger Bregman’s outlook in Utopia for Realists. This young Dutchman suggests that we can construct a society with visionary ideas that could be implemented, like the plans for a universal basic income. As an aging Utopian, I have always endorsed building castles in the sky. Shocking ideas which are usually rejected out of hand often return to become popular and even accepted. The questions of ethics in a world that will be so different are daunting. Optimistically, crises, real or perceived, can spark genuine change. Sometimes this can be mind-blowing: as Harari cautioned, human nature is likely to be transformed in the 21st century because intelligence is uncoupling from consciousness. The countering encouragement he provides is that ultimately “It is our free will that imbues the universe with meaning.”7
—————————————————

1 Yuval Noah Harari, Homo Deus (2016), p. 356
2 Demis Hassabis, “The Mind in the Machine,” The Financial Times Magazine, April 22, 2017
3 Yuval Noah Harari, “In Big Data We Trust,” The FT Magazine, August 27, 2016, p. 14
4 Mark O’Connell, “Goodbye Body, Hello Posthuman Machine,” The Observer, March 26, 2017
5 Daniel J. Levitin, “Why the Modern World Is Bad for Your Brain,” The Observer, January 18, 2015
6 “Computer Security,” The Economist, April 8, 2017, p. 75
7 Harari, Homo Deus, op. cit.

* Regulating the internet would require a change in the political mindset in both Europe and the United States. The invasions of privacy and security, as well as the massive tax evasion by the largest internet companies, have not sufficed to bring about the essential changes. The two prime decisions made by the creators of the internet, principally by Tim Berners-Lee, were that there would be no central control or ownership and that the network could not be dominated by any particular application.

NEW TRENDS IN APPRENTICESHIPS AND INTERNSHIPS IN THE ARTS

This blog is an attempt to deal with my deep concern for the millions of youths globally who cannot find jobs and who are not only angry but also bewildered about what to do and where to turn. Meanwhile our profit-focused planet is steadily introducing robots and new technology, further threatening the employment of humans. The challenges are daunting.

In the past, an agricultural life did not require formal education. For the minority that lived in cities, most young men followed their father’s occupation or that of family members. Apprenticeship was viewed as the natural next step for those who did not go to school or who finished only the first level of education. The training they received provided them with skills that made them useful to society at large. The industrial revolution rapidly changed this, with many youths entering large mills, coal mines and other industries (as well as the military), while only a select few of the better-off went to university. The second half of the 20th century saw ever increasing numbers go on to higher education as society came to regard a college diploma as a kind of white-collar job guarantee.

In the 21st century many of the enormous numbers of college graduates who had not majored in the sciences, engineering or the law suddenly faced the reality that genuine jobs were few and far between and that they had not been trained or given skills that would enable them to find work. Temporary service jobs were just that. In some countries apprenticeships were one way forward, in others internships (the new socially acceptable nomenclature for apprenticeships) became more marketable.

Internships are now flourishing, but are still restricted to a large extent to those who have the means to travel or who enjoy the support and housing of their parents. Most internships are supported by the state and large corporations. They also are focused on industries rather than on the commercial arts or crafts. Art college can prepare those at the end of their teens for a great many things, but once they complete their education, they need to develop the skills that will prepare them for the real world. One way to gain an advantage over other students in the field is to land an art internship, which is likely to provide the tools and experiences necessary to develop their talent and, optimistically, land them jobs.

Many art galleries hire interns to fill the gaps at little cost. Those seeking “hands-on” experience can try to attain an internship under an art director, a graphic designer, or even an art auctioneer. An internship will help provide a better idea of where one fits in, what technologies and processes one needs to learn, and what specific types of projects one might like to work on as a creative professional. With so many internship programs now available in a wide variety of creative organizations, applicants can choose the specific internship experience that could propel them into a career in the arts. It is no secret that internships are one of the best ways to land a steady job offer, and becoming a high-performing intern is a superior way to improve one’s employment prospects, so many students treat the status and nature of the company to which they are applying as crucial to their internship search.

Apprenticeships, which have existed for over two millennia, are another way to enter the arts, but they have mostly been in decline over the past few decades. The intimacy of this kind of learning is no longer respected as it was in previous eras. In the world of industry, apprenticeship has generally become less common. Fortunately apprenticeship is still flourishing in much of the service industry, ranging from the culinary domain to such varied professions as hairdressing, massage, and design. Of course, in the arts and crafts, such as pottery and sculpting, it remains essential.

I should like to see more art-connected artisans entering the work place and furthering this historic tradition. I deeply appreciate the way potters are taking clay into different spheres. The craft and the art are separate, but the truly fine art ceramicists are becoming recognized for their creative talent. As one curator, Sara Matson, explained: “There is an engagement with materials again, a sense of rejecting the digital and getting back to the visceral, and there’s nothing more visceral than clay.”1

Personally, I admire the way Italy’s celebrated foundries operate: many of the artisans who work on making molds, polishing and the like started as apprentices at the age of 14. As these young people develop their skills they tend to enter deeply gratifying lives. The same opportunities arise in the media and publishing, in photography, design, furniture, glass-blowing and even the performing arts. However, in Italy a large portion of apprenticeships demanding individual skills and passions are still restricted to a family setting in smaller social communities such as towns and villages. But for how much longer can this last as the big cities in the north focus on specialized skills and the rest enter menial service jobs? Blacksmiths, rope-makers, saddlers, tanners, weavers and wheelwrights have all but disappeared. On the other hand, artisanal bakers, beer-makers and cheese-makers are gaining popularity.

Apprenticeships are now generally focused on helping those who are at the beginning or crossroads of their careers to earn while they learn. They gain occupational skills as they contribute to and participate in the production process. Often they combine work-based learning and classroom instruction over a two- to four-year period, leading to steady employment as well as recognized and valued credentials. Unlike the part-time jobs frequently held by high school and college students, apprenticeship improves such employability skills as teamwork, communication and responsibility. Mentoring components, which I accentuated in my second blog three years ago, serve to increase the motivation of the young apprentices, whose training primarily revolves around supervised work. Such apprenticeship gives “graduates” pride as well as a sense of occupational identity, so important to minorities.

Developing the necessary support system for apprenticeship programs demands financial backing at local, state, and national levels. I find the ways apprenticeships vary from country to country fascinating. In the United States the federal subsidies to encourage apprenticeship programs are far lower than those of other countries. Apprentices make up only a tenth as large a share of the US workforce as they do in Canada, the UK, Australia, Germany and Switzerland. Shamefully, total annual US government funding for apprenticeship is less than $400 per participant. This compares to annual national spending of around $12,000 per participant for students attending two-year public colleges. This low contribution to apprenticeship can partially be attributed to a lack of public and political support; it must also be noted that only a minority of US firms actually go on to hire apprentices. The “academic only” college focus of policymakers in Washington deprives many young people of access to alternative pathways towards rewarding careers. Apprenticeship could narrow the post-secondary achievement gaps in both race and gender. Providing participants with wages while they learn has proven particularly beneficial. Mentors and supervisors provide the close monitoring and feedback which ultimately help sustain good performance both in the classroom and at work.

Prof Robert Lerman, an expert on apprenticeship programs in the US, has pointed out that interest was increasing in Washington because of the recent successes of Britain and Switzerland, which have been copied by training groups in South Carolina, Colorado and Wisconsin. (Before the arrival of Donald Trump, that is.) Prof Lerman declared that: “A robust apprenticeship system is especially attractive because of its potential to reduce youth unemployment, improve the transition from school to career, upgrade skills, raise wages of young adults, strengthen a young worker’s identity, increase US productivity, achieve positive returns for employers and workers, and use limited federal resources more effectively.”2 In the various American state programs, the course work of the apprentices is usually equivalent to one year of community college. If they complete their training, they receive a valuable credential attesting to their mastery of a skill or skills required in their field.

The experience of apprenticeships in the United Kingdom contrasts dramatically with that of the United States. More than 800,000 apprentices now make up close to 3 percent of the national work force. With public spending of close to $2.5 billion per year, apprenticeship has moved into the social mainstream. National branding, marketing and PR by private training organizations, firm-based initiatives and Further Education Colleges have been remarkably successful: apprenticeship positions rose from about 150,000 in 2007 to close to a million a decade later. The result is that over half the young population chooses not to follow an academic path. Being career-focused, almost a third of these English teenagers know what they want to do in the future. Perhaps that is why there are now over 1,500 different apprenticeships on offer across 170 national industries. Starting this April, all UK employers with a payroll of over £3 million are required to pay into the Apprenticeship Levy, which was set up by the government to fund apprenticeship training, including new digital training vouchers.

I truly admire The National Skills Academy for the Creative and Cultural, a charity which focuses on apprenticeships with the support of the Arts Council of the UK. In cooperation with the Skills Academy network, a program designed to improve training in the creative and cultural industries has been established. Creative Choices is a resource for anyone wanting to work in a creative career. Job listings are posted by employers across the country, and all the jobs, internships, and apprenticeships must now meet the National Minimum Wage requirements.

Creative Choices events give 13- to 16-year-olds in the UK the opportunity to learn about working in music, theater, design and cultural heritage. At Production Days, aspiring backstage crews are given the opportunity to work at some of the biggest music festivals. And in the Technical Masterclasses, bespoke training is provided for young aspiring professionals with some of the leading directors, producers, and theatrical stage managers in the world.

The Backstage Centre has been built, as part of a major regeneration project in London’s Thames Gateway, to provide a training and rehearsal facility to meet the demand of the industry for over 6,500 new jobs in the live music and theater industries this year. This Centre is being used by the international music, film and theater industries as a performance, rehearsal and filming venue. Any profits made through commercial activities directly fund the charitable work to help the future creative workforce. The Center has been part of the program “Building a Creative Nation” which was launched four years ago to ensure that the next generation can continue to access creative careers in what is widely hailed as the world’s foremost national creative sector.

I have been surprised to learn that in Switzerland, whose apprenticeship program is much prized and acclaimed, private companies spend around $5 billion a year to ensure that the workforce pipeline is filled with young, passionate, talented people who exude hope and belief in their future. Many higher-level executives in Switzerland have participated in the program and appreciate its rigor and quality; these executives would not hire those who had not completed the national apprenticeships. The result is that a very high proportion of parents of all socioeconomic backgrounds encourage their children to enroll in apprenticeships. As a consequence, Swiss youth unemployment is below 2.5 percent, as compared to over 12 percent in the US.

Particular importance is attached in the Swiss program to both hope and personalization: students are urged to learn not only specific task-based skills but also how to be self-directed and self-sufficient, planning their time and work effectively. Moreover, 30 percent of the program’s graduates are likely, over their lifetimes, to earn a third more than their non-graduate equivalents. It is important to note that the Swiss system is not rigid: it enables students to move freely back and forth between the academic and the vocational paths. Upon graduation they can continue working in their field, switch to a different one, or pursue advanced professional degrees. All are encouraged to continue their personal and professional development throughout their lives.

“After studying and visiting the Swiss apprenticeship system, I realized that our current system of career and technical education will not sustain the needs of our business and the state of Colorado,” stated John Kinning, the head of RK Mechanical. Donna Lynne, a group president of the Kaiser Foundation Hospitals, added that the Swiss system has “de-stigmatized young people who choose a post-secondary career versus going to college.” She also noted that the program might help lower school dropout rates, a huge problem in many districts of Colorado: because young people build job skills and get paid while going to school part-time, they are less likely to quit. Only a few other states, such as Georgia and Wisconsin, now provide apprenticeships to youths aged sixteen to nineteen. This offers an alternative to the “academic only” college focus of US policymakers, which fails to narrow the achievement gaps in both gender and race.

I do want to point out, however, that the reforms inspired by the Swiss and German apprenticeship programs generally fail to cover the arts. In the United States, art colleges can give students the background for many things, but once they have completed that education they need to develop skills that will prepare them for the real world, often by landing an art internship. Those looking for such an internship at a particular company can begin their search at Internships.com or Chegg.com, where they can find art-related opportunities with widely differing organizations. Many art galleries exploit young interns to hang their shows and run errands; however, such internships can help neophytes get a better notion of where they might fit in, what specific kinds of project they might like to work on as creative professionals, and what technologies and processes they need to master.

Ultimately, the young hopefuls in the arts everywhere face the same challenge: How can I earn enough to enable me to create the way I want to, the way I need to? They may have learned some of their skills in schools, but they want to let their imaginations produce works to be appreciated for their emotional power or, perhaps, just for their beauty. Wherever they may find themselves — as cartoonist, dancer, illustrator, jeweler, photographer, sculptor, or creator in one of the many genres of the arts — they will want to assert their vision, their drive, their needs, their individual skills and their passions. For them to achieve this, support is crucial, whether it comes from family, friends, art groups, local, state or private funding, apprenticeships or even the increasingly popular internships. I believe the importance of such new social formats has to be promoted and celebrated, not only for the younger generation but to sustain the creative futures of all our global societies.

1 Curator of the exhibition now running in St Ives, “That Continuous Thing: Artists and the Ceramics Studio, 1920 to Today”; see Tom Morris, “Behind the Veneer,” The Financial Times, March 25, 2017.

2 Robert Lerman, “Expanding Apprenticeships in the United States,” Brookings, June 19, 2015