Suicide Epidemic: Since NIH-funded Clowns Do Not Want to Discuss It, We Will

A large number of NIH-funded parasites waste taxpayers’ money with the excuse that they are working toward improving the health of Americans. Francis Collins, the head of NIH, uses every opportunity to tell everyone how NIH-funded research helps improve the life expectancy of Americans (a flat-out lie). Yet, when research by Deaton and Case uncovered that the life expectancy of prime-age (45-54) Americans was falling, primarily due to rising suicides, Collins and his minions went completely silent.

We have not come across a single major science journal or blog giving adequate space to this rapidly spreading health epidemic. The word ‘epidemic’ is appropriate given the rising global trend. For example,

New Zealand: Suicide toll reaches highest rate since records kept

Canada: What’s Behind The Surge In Suicide Attempts Among Canada’s Indigenous Population

More than 100 of Attawapiskat’s 2,000 members have tried to end their lives in the last seven months, including 11 youth on Saturday alone.

The following thoughtful essay from ‘The Automatic Earth’ blog shines some light on what is going on in New Zealand.


Finance + Stress = Suicide

Nelson Lebo: “Our already horrendous suicide rate hit a new record high last year.” The news of New Zealand’s suicide rate did not surprise me when I heard it on the radio earlier this week. Anyone who pays attention to global trends could see this coming. “Psychotherapists say we need a wide-ranging review into the mental health system before there are more preventable deaths” reported Newstalk ZB.

At lighter moments I joke that the best thing about living in New Zealand is that you can see worldwide trends that are heading this way, but the worst part is that no-one believes you. This is not a lighter moment. Suicide is a serious issue and one that is growing dramatically among my peer group: white middle-aged men.

The first people to notice the emerging pattern in the United States were Princeton economists Angus Deaton and Anne Case. The New York Times reported on 2nd November, 2015 that the researchers had uncovered a surprising shift in life expectancy among middle-aged white Americans – what traditionally would have been considered the most privileged demographic group on the planet.

The researchers analyzed mountains of data from the Centers for Disease Control and Prevention as well as other sources. As reported by the Times, “they concluded that rising annual death rates among this group are being driven not by the big killers like heart disease and diabetes but by an epidemic of suicides and afflictions stemming from substance abuse: alcoholic liver disease and overdoses of heroin and prescription opioids. The mortality rate for whites 45 to 54 years old with no more than a high school education increased by 134 deaths per 100,000 people from 1999 to 2014.”

The most amazing thing about this discovery is that the Princeton researchers stumbled across these findings while looking into other issues of health and disability. But as we hear so often, everything is connected. A month before releasing this finding Dr. Deaton was awarded the Nobel Prize in Economics based on a long career researching wealth and income inequality, health and well-being, and consumption patterns.

The Royal Swedish Academy of Sciences credited Dr. Deaton for contributing significantly to policy planning that has the potential to reduce rather than aggravate wealth inequality. In other words, to make good decisions policy writers need good research based on good data. Too often this is not the case. “To design economic policy that promotes welfare and reduces poverty, we must first understand individual consumption choices. More than anyone else, Angus Deaton has enhanced this understanding.”

Days before hearing the news about New Zealand’s rising suicide rate I learned of another major finding from demographic researchers in the United States. For the first time in history the life expectancy of white American women had decreased, due primarily to drug overdose, suicide and alcoholism. This point is worth repeating, as it marks a watershed moment for white American women. After seeing life expectancies continually extend throughout the history of the nation, the trend has not only slowed but reversed. Data show the slip is only one month, but the fact that it is a decrease instead of another increase should be taken as a significant milestone.

Please note that the following sentence is not meant in the least to make light of the situation, but is simply stating a fact. The demographic groups that are experiencing the highest rates of drug overdose, suicide and alcoholism are also the most likely to be supporters of Donald Trump in his campaign for the U.S. Presidency. It does not take a Nobel Laureate to observe a high level of distress among white middle-class Americans. Trump simply taps into that angst.

As reported by CBS News, “The fabulously rich candidate becomes the hero of working-class people by identifying with their economic distress. That formula worked for Franklin D. Roosevelt in the 1930s. Today, Donald Trump’s campaign benefits from a similar populist appeal to beleaguered, white, blue-collar voters – his key constituency.”

I don’t blame most Americans for being angry. That the very architects of the global financial crisis have only become richer and more powerful since they crashed the world economy in 2008 is unforgivable. The gap between rich and poor continues to widen and the chasm has now engulfed white middle-aged workers. As the Pope consistently tells us, wealth and income inequality is the greatest threat to humanity alongside climate change.

Instead of going down the Trump track for the rest of this piece, I’d rather wrap it up by bringing the issue back to Aotearoa (New Zealand) and my small provincial city of Whanganui. To provide some background for international readers, the NZ economy relies significantly on dairy exports and many dairy farmers hold large debts. Dairy prices are known for their volatility, and recently the payouts have dropped below break-even points for many farmers.

Earlier this month Primary Industries Minister Nathan Guy announced that the government would invest $175,000 to study innovative, low-cost, high-performing farming systems already in place in New Zealand. It was reported, “The government is set to pick the brains of New Zealand’s top dairy farmers in an effort to help those struggling with the low dairy payout.”

That is great news, but the government’s investment in researching the best of the best farmers is a pittance compared with what is spent addressing depression and suicide prevention among Kiwi farmers. Isn’t this a case of putting the cart before the horse, or treating symptoms instead of causes?

Research shows that financial stress contributes significantly to the increasing suicide rates here and abroad. We know that innovative farmers who use low-input/high-performance systems are more profitable than their conventional farming brethren. Would it then be a stretch to conclude that depression and suicide are much lower among these innovative and profitable farmers? At the same time, research shows that wealth and income inequality in our more urban centres contribute to anti-social behaviours such as crime, domestic abuse and illegal drug usage.

Angus Deaton, the Nobel-winning economist, would argue that in order for policy planners to address these issues effectively they must understand the underlying causes and resultant costs. Thankfully, we do see glimmers of that from central government instead of the usual neoliberal claptrap. Credit must be given to Finance Minister Bill English for his actuarial approach to some social issues rather than the inaccurate dogmatic position often adopted by the right.

But closer to home for me, such enlightened policy planning has yet to reach our city by the awa (river). To start off, the Council’s rates structure is stunningly regressive, clearly taking significantly higher proportions of household wealth from low-income families than from high-income families. If we believe the research in this field (i.e., The Spirit Level, etc.), wouldn’t we expect the widening gap between rich and poor to result in even more anti-social behaviour in our city, which already suffers from reputation problems nationwide?

Secondly, the council’s vision documents and long-term plan are nearly devoid of intelligent strategies to address the underlying issues of anti-social behaviour, depression, poor health, and domestic problems that afflict our community. The Council pours mountains of money into an art gallery and arts events while providing token services and events for low-income families.

Will it take our own Trump or Sanders running for office to stimulate a populist revolt against regressive policies that potentially do more harm than good to our community? What will it take for us to finally get it? I first wrote about these issues in our city’s newspaper, the Chronicle, two and a half years ago… but, apparently, no one believed me. Welcome to provincial New Zealand!

Population Genetics of Ancient Jewish Population in India

‘Ancient’ Bene Israel Jews and late-arrived Baghdadi Jews in India started the Bollywood movie industry. Many famous early Indian actresses also came from these communities. This is not common knowledge in India, because those actresses took Muslim (Firoza Begum) or Hindu (Sulochana, Pramila) screen names.


Baghdadi Jews like David Sassoon also played a big role in establishing Bombay as a major trading center.

A new population genetics study looks at the historical roots of the older (and more ‘mysterious’) of those two groups. Bene Israel Jews were at times considered one of the ‘lost tribes’.

The Genetics of Bene Israel from India Reveals Both Substantial Jewish and Indian Ancestry

The Bene Israel Jewish community from West India is a unique population whose history before the 18th century remains largely unknown. Bene Israel members consider themselves as descendants of Jews, yet the identity of Jewish ancestors and their arrival time to India are unknown, with speculations on arrival time varying between the 8th century BCE and the 6th century CE. Here, we characterize the genetic history of Bene Israel by collecting and genotyping 18 Bene Israel individuals. Combining with 486 individuals from 41 other Jewish, Indian and Pakistani populations, and additional individuals from worldwide populations, we conducted comprehensive genome-wide analyses based on FST, principal component analysis, ADMIXTURE, identity-by-descent sharing, admixture linkage disequilibrium decay, haplotype sharing and allele sharing autocorrelation decay, as well as contrasted patterns between the X chromosome and the autosomes. The genetics of Bene Israel individuals resemble local Indian populations, while at the same time constituting a clearly separated and unique population in India. They are unique among Indian and Pakistani populations we analyzed in sharing considerable genetic ancestry with other Jewish populations. Putting together the results from all analyses point to Bene Israel being an admixed population with both Jewish and Indian ancestry, with the genetic contribution of each of these ancestral populations being substantial. The admixture took place in the last millennium, about 19–33 generations ago. It involved Middle-Eastern Jews and was sex-biased, with more male Jewish and local female contribution. It was followed by a population bottleneck and high endogamy, which can lead to increased prevalence of recessive diseases in this population. This study provides an example of how genetic analysis advances our knowledge of human history in cases where other disciplines lack the relevant data to do so.

Transistors, Translation and tRNAs

Prior to the 1960s, most computers were analog computers.

World War II era gun directors, gun data computers, and bomb sights used mechanical analog computers. Mechanical analog computers were very important in gun fire control in World War II, the Korean War and well past the Vietnam War; they were made in significant numbers.

The FERMIAC was an analog computer invented by physicist Enrico Fermi in 1947 to aid in his studies of neutron transport.[12] Project Cyclone was an analog computer developed by Reeves in 1950 for the analysis and design of dynamic systems.[13] Project Typhoon was an analog computer developed by RCA in 1952. It consisted of over 4000 electron tubes and used 100 dials and 6000 plug-in connectors to program.[14] The MONIAC Computer was a hydraulic model of a national economy first unveiled in 1949.


In those years, many universities and companies were building analog computers, although there were rare exceptions.

Computer Engineering Associates was spun out of Caltech in 1950 to provide commercial services using the “Direct Analogy Electric Analog Computer” (“the largest and most impressive general-purpose analyzer facility for the solution of field problems”) developed there by Gilbert D. McCann, Charles H. Wilts, and Bart Locanthi.[15][16]

Educational analog computers illustrated the principles of analog calculation. The Heathkit EC-1, a $199 educational analog computer, was made by the Heath Company, USA c. 1960.[17] It was programmed using patch cords that connected nine operational amplifiers and other components.[18] General Electric also marketed an “educational” analog computer kit of a simple design in the early 1960s consisting of a two transistor tone generator and three potentiometers wired such that the frequency of the oscillator was nulled when the potentiometer dials were positioned by hand to satisfy an equation. The relative resistance of the potentiometer was then equivalent to the formula of the equation being solved. Multiplication or division could be performed depending on which dials were considered inputs and which was the output. Accuracy and resolution was limited and a simple slide rule was more accurate; however, the unit did demonstrate the basic principle.
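The null-balance principle behind kits like that GE design can be sketched in a few lines of code. This is only an illustrative simulation with made-up numbers, not a description of the actual circuit: the “dial” is adjusted until the imbalance is nulled, and its final position is the answer.

```python
# Illustrative null-balance "computation" (all values hypothetical).
# The operator turns a dial until the oscillator tone is nulled;
# here we home in on the null point by repeatedly halving the dial's range.
def null_balance(a, c, lo=0.0, hi=10.0, tol=1e-9):
    """Find the dial setting x that satisfies a * x = c."""
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if a * mid < c:      # the imbalance says the dial is set too low
            lo = mid
        else:                # too high (or already nulled)
            hi = mid
    return (lo + hi) / 2.0

# Which dials are treated as inputs and which as output decides the operation:
print(null_balance(3.0, 12.0))  # ~4.0, i.e. the division 12 / 3
```

Multiplication works the same way with the roles of the dials swapped, mirroring the kit’s behavior as described above.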

Analog computers have completely disappeared from the scene today, and the transition happened in several phases. In the 1930s, Alan Turing developed the mathematical principles of an abstract digital machine to show that it could perform arbitrary calculations. A number of digital machines were built in the 40s and 50s, but the big transition could start only after the invention of silicon integrated circuits built from complementary metal–oxide–semiconductor (CMOS) MOSFET switches. This remains the predominant technology today.

However, despite such prevalence of digital computers everywhere, the analog world exists and it exists right beneath the digital circuitry. Think about it this way. If the natural world (of silicon materials) is analog and the computer is digital, the transition from analog to digital must happen somewhere, right? That transition is carefully hidden inside the MOSFET.

Between 1997 and 2000, I worked at a semiconductor company, and the task of our group was to hide the analog world from those who did not want to face it. The task gets increasingly difficult with every round of miniaturization, because all elements of the circuit, including n- and p-transistors, gate capacitors and even the electrical wires connecting them, start to remind the world that the analog world exists.
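The digital abstraction described above can be caricatured in a few lines. The threshold voltages below are invented for illustration (real values come from a process datasheet), but the idea is faithful: a continuous voltage is collapsed into a clean 0 or 1, with a forbidden middle region where the analog world leaks through.

```python
# Toy model of the digital abstraction over analog voltages.
# VIL/VIH are hypothetical illustration values, not from any datasheet.
VIL = 0.8   # at or below this voltage, the gate reads a clean 0
VIH = 2.0   # at or above this voltage, the gate reads a clean 1

def to_bit(voltage):
    """Collapse an analog voltage into a digital value, if possible."""
    if voltage <= VIL:
        return 0
    if voltage >= VIH:
        return 1
    return None  # forbidden region: the hidden analog world shows itself

print([to_bit(v) for v in (0.2, 3.1, 1.4)])  # [0, 1, None]
```

The gap between VIL and VIH is what gives digital circuits their noise immunity; shrinking transistors squeeze that margin, which is exactly why hiding the analog world gets harder with each generation.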



Switching to the living world, Darwin’s model of evolution was entirely analog. Darwin was not aware of Mendel’s discovery of ‘genes’, which introduced discreteness into evolution.

Biologists did not find out about Mendel’s experiments until around 1900, and then a huge conflict ensued between those who sided with Darwin’s continuous evolution and those who sided with Mendel’s discrete evolution.

Mendel’s results were quickly replicated, and genetic linkage quickly worked out. Biologists flocked to the theory; even though it was not yet applicable to many phenomena, it sought to give a genotypic understanding of heredity which they felt was lacking in previous studies of heredity which focused on phenotypic approaches. Most prominent of these previous approaches was the biometric school of Karl Pearson and W. F. R. Weldon, which was based heavily on statistical studies of phenotype variation. The strongest opposition to this school came from William Bateson, who perhaps did the most in the early days of publicising the benefits of Mendel’s theory (the word “genetics”, and much of the discipline’s other terminology, originated with Bateson). This debate between the biometricians and the Mendelians was extremely vigorous in the first two decades of the twentieth century, with the biometricians claiming statistical and mathematical rigor,[37] whereas the Mendelians claimed a better understanding of biology.[38][39] (Modern genetics shows that Mendelian heredity is in fact an inherently biological process, though not all genes of Mendel’s experiments are yet understood.)[40][41]


Fisher’s mathematics (1920s and 30s) brought those two camps together, and then a series of discoveries (1950s and 60s), starting with Watson and Crick’s solution of the DNA structure, further confirmed the existence of a digital information layer in every living organism. Since then, research on the living world has moved in step with technological advances in the computing world. Advances in digital technology allowed better experimentation on the living world, and those experiments focused increasingly on genetics and genomics. As a result, we have learned a lot about how the information layer of living organisms evolved over time, and we expect to learn more through the completion of the 10,000 insect genome project, the 1,000 fish transcriptome project and various other comparative sequencing projects.

However, when you compare the two narratives, do you notice something missing from the second one? In the case of the semiconductor industry, a dedicated group hides the existence of the analog world from the vast majority of practitioners, but who does so for the living world? Moreover, where is the analog connection of life’s information layer hidden?

As you can guess from the title, the answer most likely lies in ‘translation’ and ‘tRNAs’. How so will be the topic of another commentary.

Was Google Really Censoring Elhaik’s Khazar Research in 2013?

In 2013, Dr. Elhaik complained about his home page at Johns Hopkins University mysteriously disappearing from Google searches right after his first Jewish genomics paper started to gain attention. We reproduced his complaint here, and his page came back on top again after a few days.

Google Censorship: Scariest Thing for Academic Freedom

Readers found those accusations rather hysterical and attributed the disappearance (and reappearance) to something as mundane as a ‘search algorithm update’. It was a time before the Snowden releases, when the crooks running Google and Facebook were seen as various reincarnations of Buddha.

It is becoming increasingly clear that those ‘search algorithms’ and ‘trending news’ may be more than something generated by a fully automated cluster of servers near the Oregon-Washington border processing large amounts of clicks and links. In 2014, RT posted an article on “Censorship war: Website unmasks links Google is blocking from search results“, but the evidence of censorship was still indirect. More direct evidence regarding Facebook came yesterday, when Gizmodo published –

Want to Know What Facebook Really Thinks of Journalists? Here’s What Happened When It Hired Some

According to former team members interviewed by Gizmodo, this small group has the power to choose what stories make it onto the trending bar and, more importantly, what news sites each topic links out to. “We choose what’s trending,” said one. “There was no real standard for measuring what qualified as news and what didn’t. It was up to the news curator to decide.”


They were also told to select articles from a list of preferred media outlets that included sites like the New York Times, Time, Variety, and other traditional outlets. They would regularly avoid sites like World Star Hip Hop, The Blaze, and Breitbart, but were never explicitly told to suppress those outlets. They were also discouraged from mentioning Twitter by name in headlines and summaries, and instead asked to refer to social media in a broader context.

News curators also have the power to “deactivate” (or blacklist) a trending topic—a power that those we spoke to exercised on a daily basis. A topic was often blacklisted if it didn’t have at least three traditional news sources covering it, but otherwise the protocol was murky—meaning a curator could ostensibly blacklist a topic without a particularly good reason for doing so. (Those we interviewed said they didn’t see any signs that blacklisting was being abused or used inappropriately.)

CNET summarized the article in one line that says all –

Facebook’s trending news feed may be a sham

The world’s largest social network calls out the top trending stories on its site, which is visited by more than a billion people every day. But the list may be manipulated by Facebook’s employees, who are allegedly deciding what’s “trending” based on their political views.

Zerohedge reports –

Facebook Workers Admit They “Routinely” Suppressed Conservative News

Therefore, we reopen our old question – was Google really censoring (i.e., manually filtering out) Elhaik’s Khazar research in its search results? The answer to this question is important, because many researchers now rely on searches at Google Scholar, whose results are seen as sacrosanct today, much as Google searches were in 2013.

We presume answers will not be coming anytime soon given that Google has bigger issues to resolve :)

Google planned to help Syrian rebels bring down Assad regime, leaked Hillary Clinton emails claim

Clinton email reveals: Google sought overthrow of Syria’s Assad


Those worshiping that other false god named Altmetric may take a look at Elhaik’s –

On the fallacies of Altmetric OR how I fell asleep in Sheffield and woke up in Louisiana

Caught red handed

I recalled that one of my Twitter friends had 124 followers who retweeted about my study, as you can see below, this is nearly TWICE than what Altmetric reports that I currently have. Sloppy Altmetric didn’t bother to fix that number and only some of those 124 tweets were considered towards the 73 count


Organoids and the Coming Medical Revolution – (ii)

Previous segment – Organoids and the Coming Medical Revolution – (i)

In 2006, Yamanaka surprised the whole world by announcing that he could turn adult mouse fibroblasts into pluripotent stem cells by applying four transcription factors – Sox2, Oct4, Klf4 and c-Myc. Those pluripotent stem cells can then be converted to any other cell lineage of the body.

To fully appreciate the significance of Yamanaka’s discovery, you need to also consider the political climate of 2006. The human genome assembly was completed around 2000-2001, and researchers were looking for ways to use the genome to improve human health. Stem cell research was recognized as a potentially beneficial area, but work on human embryonic stem cells also got associated with killing babies. So federal funding for research using human embryonic stem cells stopped. To add to the complications, research institutes had to build separate facilities to isolate federally funded research from human embryonic stem cell research supported by other funding.

2006 being an election year, Bush made a big deal of vetoing a stem cell bill to hide his Iraq war failure.


Stem Cell Bill Gets Bush’s First Veto

By Charles Babington
Washington Post Staff Writer
Thursday, July 20, 2006

President Bush issued the first veto of his five-year-old administration yesterday, rejecting Congress’s bid to lift funding restrictions on human embryonic stem cell research and underscoring his party’s split on an emotional issue in this fall’s elections.

At a White House ceremony where he was joined by children produced from what he called “adopted” frozen embryos, Bush said taxpayers should not support research on surplus embryos at fertility clinics, even if they offer possible medical breakthroughs and are slated for disposal.

The vetoed bill “would support the taking of innocent human life in the hope of finding medical benefits for others,” the president said, as babies cooed and cried behind him. “It crosses a moral boundary that our decent society needs to respect.” Each child on the stage, he said, “began his or her life as a frozen embryo that was created for in vitro fertilization but remained unused after the fertility treatments were complete. . . . These boys and girls are not spare parts.”

Within hours of Bush’s announcement, the House, as expected, fell short in a bid to override the veto, extinguishing the issue as a legislative matter this year but not as a political matter. Democrats said voters will penalize GOP candidates for the demise of a popular measure, and predicted the issue could trigger the defeat of Bush allies such as Sen. James M. Talent, who faces a tough reelection battle in Missouri.

Yamanaka’s discovery came as a big relief, because turning one’s skin cells or other cells back into (induced) pluripotent stem cells by applying transcription factors bypassed those ethical questions. The action was equivalent to using a time machine on a developed cell and turning the clock back. Books of moral dilemmas do not have a chapter on negative-time actions.

Readers will enjoy the following interview with Yamanaka, in which he talks about his amazing discovery.

What inspired him to choose those four transcription factors? More on that topic in the following post, but here is a hint (h/t: Developmental Biology)


Transfer RNAs and Neurodegenerative Disorders

Among all biomolecules within the cell, tRNAs get the least respect. Their supposed importance ended right after the ‘adaptors’ corresponding to entries in the genetic code table were identified (mid-60s). Since then, attention shifted to more complex RNAs like the rRNAs.

Interestingly, tRNAs are making a major comeback, and in the most unexpected places. Several groups have linked them with neurodegenerative disorders. Is that due to too much money looking for causes of complex diseases, or are tRNAs indeed more remarkable than previously thought?

Here is a paper from 2013 that made a major splash by linking tRNAs with disease, and there are a few additional ones.

CLP1 links tRNA metabolism to progressive motor-neuron loss

CLP1 was the first mammalian RNA kinase to be identified. However, determining its in vivo function has been elusive. Here we generated kinase-dead Clp1 (Clp1K/K) mice that show a progressive loss of spinal motor neurons associated with axonal degeneration in the peripheral nerves and denervation of neuromuscular junctions, resulting in impaired motor function, muscle weakness, paralysis and fatal respiratory failure. Transgenic rescue experiments show that CLP1 functions in motor neurons. Mechanistically, loss of CLP1 activity results in accumulation of a novel set of small RNA fragments, derived from aberrant processing of tyrosine pre-transfer RNA. These tRNA fragments sensitize cells to oxidative-stress-induced p53 (also known as TRP53) activation and p53-dependent cell death. Genetic inactivation of p53 rescues Clp1K/K mice from the motor neuron loss, muscle denervation and respiratory failure. Our experiments uncover a mechanistic link between tRNA processing, formation of a new RNA species and progressive loss of lower motor neurons regulated by p53.

Organoids and the Coming Medical Revolution – (i)

Among the various biotechnology inventions of the last few years with the potential to revolutionize medicine, nothing excites us more than the growing of three-dimensional human organoids on Matrigel. Therefore, we plan to devote a number of posts to this topic to keep our readers aware of the practices, potentials and challenges.

If you have not heard of organoids at all, the following few news articles will help you get started. In future posts, we plan to cover the primary literature in depth, but let us start with the popular media.

Scientists grow functional kidney organoid from stem cells

There are many diseases that attack specific organs, landing patients on a transplant list. Unfortunately, our bodies have markers that identify an organ as “self,” which makes it difficult to find an organ match. Many individuals die waiting for an organ transplant because a match can’t be found.

Research on stem cells—a type of cell that is able to transform into nearly any cell type—has raised hopes of treating organ failure. Researchers envision using these cells to grow fully functional organs.

A functional organ is similar to a machine. Organs contain many interacting parts that must be positioned in a specific configuration to work properly. Getting all the right cell types in the appropriate locations is a real challenge. Recently, a team of scientists has met that challenge by using stem cells to grow a tissue, termed an organoid, that resembles a developing kidney.

Scientist: Most complete human brain model to date is a ‘brain changer’

Scientists at The Ohio State University have developed a nearly complete human brain in a dish that equals the brain maturity of a 5-week-old fetus.

The brain organoid, engineered from adult human skin cells, is the most complete human brain model yet developed, said Rene Anand, professor of biological chemistry and pharmacology at Ohio State.

The lab-grown brain, about the size of a pencil eraser, has an identifiable structure and contains 99 percent of the genes present in the human fetal brain. Such a system will enable ethical and more rapid and accurate testing of experimental drugs before the clinical trial stage and advance studies of genetic and environmental causes of central nervous system disorders.

An Inhibitor reduces the effects of Zika virus in Brain Organoid

An investigation on Zika has shown that it modifies the function of the TLR3 molecule and causes cell ‘suicide’ in the brain. The experiment, conducted in laboratory-grown brains, seeks to use an inhibitor to reduce the aggressiveness of the infection, which results in fetal microcephaly. Early results indicate that cells infected by the virus decreased by 16% in five days.


What is an organoid?

Organoids are tiny organ-like tissues grown in the lab in a petri dish.


How are they grown?

Usually the researcher starts with pluripotent stem cells and applies a series of growth factors and signaling molecules over several days. The experiment is done in a special material (e.g. Matrigel) that allows the growth to take place in three dimensions. Over time, the stem cells divide and develop into specific organ tissues depending on the chosen factors.

For example, here is the ‘recipe’ to grow human lung organoids, based on the 2015 paper – “In vitro generation of human pluripotent stem cell derived lung organoids“. Let us quickly explain what is going on, and then you can read the actual methods.

There are three steps, explained here in highly simplified form.
(i) The researchers start with pluripotent stem cells (PSCs) and apply Activin A for 3 consecutive days. This turns the PSCs into definitive endoderm.
(ii) The researchers apply Noggin and SB431542 for another 4 days to turn the definitive endoderm into anterior foregut.
(iii) The researchers apply Noggin, SB431542 and FGF4 for another 4 days to turn the anterior foregut into a lung organoid. This part of the experiment is done in Matrigel so that the tissue can maintain its three-dimensional pattern.
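The three-step recipe above can be written down as a small data structure. The factor names and durations come from the simplified steps; the layout itself is just our sketch of how one might track such a protocol.

```python
# The simplified lung-organoid recipe from the text, as data.
# Factor names and durations are from the simplified steps above;
# the dictionary layout is our own illustration.
lung_organoid_protocol = [
    {"stage": "definitive endoderm", "factors": ["Activin A"], "days": 3},
    {"stage": "anterior foregut",
     "factors": ["Noggin", "SB431542"], "days": 4},
    {"stage": "lung organoid",
     "factors": ["Noggin", "SB431542", "FGF4"],
     "days": 4, "substrate": "Matrigel"},
]

# Total duration of the simplified protocol
total_days = sum(step["days"] for step in lung_organoid_protocol)
print(total_days)  # 11
```

Note that the full methods quoted below add further details (e.g. CHIR99021 in the final stage and longer incubation windows) that this simplified view omits.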

Differentiation of PSCs into definitive endoderm

Differentiation into definitive endoderm was carried out as previously described (D’Amour et al., 2005; Spence et al., 2011). Briefly, a 4-day Activin A (R&D systems, Minneapolis, MN) differentiation protocol was used. Cells were treated with Activin A (100 ng ml−1) for 3 consecutive days in RPMI 1640 media (Life Technologies, Grand Island, NY) with increasing concentrations of 0%, 0.2% and 2% HyClone defined fetal bovine serum (dFBS, Thermo Scientific, West Palm Beach, FL).

Differentiation of definitive endoderm into anterior foregut

After differentiation into definitive endoderm, foregut endoderm was differentiated, essentially as described (Green et al., 2011). Briefly, cells were incubated in foregut media: Advanced DMEM/F12 plus N-2 and B27 supplement, 10 mM Hepes, 1× L-Glutamine (200 mM), 1× Penicillin-streptomycin (5000 U/ml, all from Life Technologies) with 200 ng/ml Noggin (NOG, R&D Systems) and 10 µM SB431542 (SB, Stemgent, Cambridge, MA) for 4 days. For long term maintenance, cultures were maintained in ‘basal’ foregut media without NOG and SB, or in the presence of growth factors including 50, 500 ng/ml FGF2 (R&D systems), 10 µM Sant-2 (Stemgent), 10 µM SU5402 (SU, Stemgent), 100 ng/ml SHH (R&D systems), and SAG (Enzo Life Sciences, Farmingdale, NY) for 8 days.

Directed differentiation into anterior foregut spheroids and lung organoids

After differentiation into definitive endoderm, cells were incubated in foregut media with NOG, SB, 500 ng/ml FGF4 (R&D Systems), and 2 µM CHIR99021 (Chiron, Stemgent) for 4–6 days. After 4 days with treatment of growth factors, three-dimensional floating spheroids were present in the culture. Three-dimensional spheroids were transferred into Matrigel to support 3D growth as previously described (McCracken et al., 2011). Briefly, spheroids were embedded in a droplet of Matrigel (BD Bioscience #356237) in one well of a 24 well plate, and incubated at room temperature for 10 min. After the Matrigel solidified, foregut media with 1% Fetal bovine serum (FBS, CAT#: 16000–044, Life Technologies) or other growth factors and small molecules were overlaid and replaced every 4 days. Organoids were transferred into new Matrigel droplets every 10–15 days.

The procedure is similar for other organs, except for the particular series of factors selected. If the steps appear too mysterious, do not worry. We will cover the scientific details in a future post.


What are the medical potentials of the organoid technology?

One can take tissue from any person, turn it back into induced pluripotent stem cells (iPSCs) and then grow that person’s organs from the iPSCs. That means it is possible to check whether a drug will work on, or harm, a person’s brain or kidney before administering the drug to the person himself. That marks the arrival of truly personalized medicine, and without putting the person at risk!



To understand how it all started, you need to go back to stem cell pioneer Shinya Yamanaka.

In 2006, Shinya Yamanaka discovered how intact mature cells in mice could be reprogrammed to become immature stem cells. Surprisingly, by introducing only a few genes, he could reprogram mature cells to become pluripotent stem cells, i.e. immature cells that are able to develop into all types of cells in the body.

Continue reading here.

The Wall Will be Built and Mexico Will Pay for It

…to stop Americans from flooding their effective and far cheaper healthcare system.


Jewish Genetics – Forward Magazine Makes Elhaik’s Work Go Viral


A few weeks back, we wrote about Dr. Eran Elhaik’s new and interesting work on tracing the roots of Ashkenazi Jews in Northeastern Turkey.

New Paper by Elhaik Shows that Ashkenazi Jews Came from Northeastern Turkey

Elhaik relied on his previously published population genetics algorithm that could trace the ancestral origin of someone to within a village based on genomic data. That work was done in collaboration with National Geographic.
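To give a rough feel for how such a biogeographic predictor works (the actual GPS method is described in Elhaik's published work and is considerably more sophisticated), here is a toy Python sketch of the core idea: each reference population has a known location and an admixture profile, and a test sample is placed at a similarity-weighted average of the coordinates of its genetically closest references. All coordinates and admixture vectors below are invented for the illustration.

```python
import math

# Toy reference panel: name -> ((lat, lon), admixture vector).
# Values are made up purely for demonstration.
REFERENCES = {
    "pop_A": ((41.0, 40.5), [0.70, 0.20, 0.10]),
    "pop_B": ((39.9, 32.8), [0.60, 0.30, 0.10]),
    "pop_C": ((52.2, 21.0), [0.20, 0.30, 0.50]),
}

def genetic_distance(u, v):
    """Euclidean distance between two admixture vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def predict_origin(sample, references, k=2):
    """Place the sample at a similarity-weighted average of the
    coordinates of its k genetically closest reference populations."""
    ranked = sorted(references.values(),
                    key=lambda r: genetic_distance(sample, r[1]))[:k]
    weights = [1.0 / (genetic_distance(sample, adm) + 1e-9)
               for _, adm in ranked]
    total = sum(weights)
    lat = sum(w * loc[0] for w, (loc, _) in zip(weights, ranked)) / total
    lon = sum(w * loc[1] for w, (loc, _) in zip(weights, ranked)) / total
    return lat, lon

# A sample genetically close to pop_A is placed near pop_A's location.
print(predict_origin([0.68, 0.22, 0.10], REFERENCES))
```

Note that adding or removing a genetically distant reference population does not move a given sample's prediction, which is the sense in which such a predictor is unbiased with respect to the test samples.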

Nevertheless, Elhaik’s latest paper did not catch as much attention as it deserved, until the Forward magazine decided to trash it using the opinions of a bunch of ‘scholars’ with no population genetics background.

Don’t Buy the Junk Science That Says Yiddish Originated in Turkey

The core of their criticism is rather hilarious, because they elevated the peripheral part of Elhaik’s paper to ‘central evidence’, while downgrading his DNA evidence to a ‘belief’ system! In fact, their ‘scholars’’ ignorance of genetics is downright scary, because they repeatedly argued that the results could change with the inclusion of Sephardic Jews. How can evidence derived from one person’s genomic data change with the inclusion of a completely different person?

Of the many dubious pieces of evidence presented to support Elhaik’s unorthodox theory is the fact that four villages in ancient Turkey once had names that sounded something like Ashkenaz. Later in the film Elhaik explains, without a trace of humor, that Yiddish-speakers talk “like Yoda” and he even mentions Yoda’s use of unusual terms like “Jedi” and “wookies” to demonstrate Yiddish’s linguistic properties. Dr. Elhaik, using a tool he designed called Geographic Population Structure, which he believes can pinpoint the origins of an ethnic group according to its DNA, published an article in the journal “Genome Biology and Evolution” in which he elucidates his theory.


Elhaik wrote a detailed criticism of the Forward article in his blog, which we post below.

Responding to the criticism for Das et al. (2016)

In Das et al. (2016), we applied the Geographic Population Structure (GPS) algorithm to the genomes of Yiddish and non-Yiddish speaking Ashkenazic Jews (and other Jewish and non-Jewish populations) to study the origin of their genomes. Since genetics, geography, and linguistics are well correlated we surmised that the origin of the DNA would point to the origin of the Yiddish language. Surprisingly, GPS traced 93% of the samples to northeastern Turkey where we found four villages whose names may be derived from the word Ashkenaz. By the proximity of this region to Slavic lands and combined with other historical and linguistic evidence our findings were in support of Prof. Wexler’s Slavic hypothesis rather than the dominant Rhineland hypothesis proposing a Germanic origin to Yiddish.


The study was published two weeks ago and has been picked up by the media. It has received nearly 100% positive coverage in over 100 media outlets and numerous blogs. Expectedly, we have also received a bit of criticism, some of which will be addressed here and some of which will be ignored because it is merely ad hominem, not science.

Genetic criticism

None has been received; however, some people have voiced their concerns about the implications of our results for their potential relatedness to the ancient Judaeans. I have commented at length on this issue to the Israeli Globes (Hebrew). Briefly, our study did not focus on the origin of Jews or even all Ashkenazic Jews, but rather on the origin of Yiddish, using a third of the Ashkenazic Jewish community for which genomic data were available. Testing whether one is related to the ancient Judaeans, Jesus, Moses, or Muhammad requires actually sequencing the genomes of these people and meticulously comparing them with modern-day genomes, looking for shared biomarkers. As opposed to the latter three, we actually have plenty of ancient Judaean skeletons that no one has ever sequenced (and probably never will). Why the DNA of the ancient Judaeans has not been sequenced to settle the question of relatedness once and for all is a question that should be directed to Israeli archaeologists. It is most unfortunate that members of the general public have been misled to believe (no doubt after paying a lot of money to DTC companies) that they are related to ancient figures without any shred of evidence. However, this is not something our study aims to prove or disprove.

Methodological criticism

In “Scholars Blast New Study Tracing Ashkenazi Jews to Khazars of Ancient Turkey,” Mr. Liphshiz cites two academics who criticized our study, or more precisely, criticized the press release: most blasting indeed. First is Prof. Sergio DellaPergola, a demographer who proposed that including Sephardic Jews would have changed our findings.

serious research would have factored in the glaring genetic similarity between Sephardim [sic] and Ashkenazim [sic], which mean Polish Jews are more genetically similar to Iraqi Jews than to a non-Jewish Pole.

First, let us start with some basic biology. Each person has unique DNA: studying the DNA of non-Ashkenazic Jews would not change the DNA of Ashkenazic Jews, nor the predicted origin of their DNA (i.e., “ancient Ashkenaz” in northeastern Turkey). GPS is an unbiased algorithm; that is, including or excluding other samples does not change the results for the test samples. GPS also cannot relocate the villages bearing the name of Ashkenaz to Germany.

Second, Iraqi and Iranian Jews are extremely similar, and the latter were indeed included in the study. The genetic similarity between Ashkenazic Jews and Iranian Jews was explained by their shared Iranian-Turkic past.

Had Prof. DellaPergola bothered to read our study rather than rely on the above figure, which was produced for the press and for simplicity included only Ashkenazic Jews, he would have found that we did analyze Sephardic Jews (the yellow and pink triangles in Figure 4 from Das et al. 2016 correspond to Iranian and Mountain Jews, considered “Sephardic Jews”).


Third, to date, only three biogeographical analyses have been carried out for Ashkenazic Jews. The first was done by Elhaik (2013), who mapped Ashkenazic Jews to western Turkey, ~100km away from “ancient Ashkenaz,” and included only Ashkenazic Eastern European Jews. The second was done by Behar et al. (2013), who included both Ashkenazic and Sephardic Jews and mapped Ashkenazic Jews to Eastern Turkey, ~600km away from “ancient Ashkenaz.” Using a large dataset that included mostly Ashkenazic Jews and some Sephardic Jews, and a more accurate algorithm, the third study, by Das et al. (2016), discovered “ancient Ashkenaz.” In summary, all three studies pointed to Turkey. Interestingly, Behar et al. (2013) interpreted their results in favor of a Middle Eastern (Israelite) origin, although it is unsupported by their data. The inclusion of Sephardic samples did not change the Turkish geolocation for the latter two studies.

Unlike Prof. DellaPergola, historian Prof. Shaul Stampfer went to even greater lengths, enlightening us with his well-supported criticism of our study:

It is basically nonsense

Behind the scenes, Prof. Stampfer is harassing people involved in the preparation of my papers, alleging that a population description was omitted from Elhaik (2013). Unfortunately, this is another case of an academic who may consider reading papers a luxury. A brief glance at the methods section and, yes, most unfortunately, the supplementary materials would have revealed a very detailed description of all populations and individuals studied. Unless capable of reading and understanding genomic studies, Prof. Stampfer may be advised to focus on the controversy over Shechita between Hasidim and Mitnagdim, a field which earned him the respect of his peers.

Linguistic criticism

The most interesting criticism, in my opinion, is the one that focuses on the linguistic aspects of Yiddish. It is worth clarifying that we did not carry out a linguistic but rather a genomic study that yielded biogeographical predictions, which were interpreted in favour of the Slavic theory over the Rhineland theory.

Mr. Dovid Katz said to The Jewish Chronicle that:

There is not a single word or sound in Yiddish that comes from Iranian or Turkish, and older Western Yiddish thrived before there was a single Slavic-derived word in the language.

The following response was provided by Prof. Paul Wexler, co-author of Das et al. (2016):

Unfortunately, I am unable to reply in the most appropriate manner, which would require publishing my collection of about 800 pages (single-spaced) of Iranian, Turkic, Tocharian, Mongolian and Chinese influences in Yiddish–both in the latter’s western and eastern variants. Mr. Katz’s claim that there are no Oriental elements in Yiddish and that even Slavic elements do not enter Western Yiddish until centuries after the alleged rise of the Yiddish language in the German lands, is totally false. Mr. Katz’s remarks are more of an emotional tirade than a scholarly statement. Apparently, Mr. Katz did not read (or did read but failed to understand) my recent articles on the subject of Iranian and Turkic elements, or my many books and articles dating from the 1990s showing the nature of the “Slavicity” of Yiddish. Indeed, to the best of my knowledge, Katz has no knowledge of Slavic, Iranian, Turkic, etc. so it would be nearly impossible for him to evaluate my writings and examples (which I hope soon to publish in a book form). Mr. Katz also does not appear to know about the pre- and post-World War II writings of Max Weinreich, the doyen of the field of modern Yiddish linguistics (1894-1969), whom I had the distinct pleasure of meeting personally. Weinreich wrote that German Yiddish from its earliest stages (10th c) was in contact with Sorbian, and possibly also with Polabian, two of the many Slavic languages which reached what is now modern Germany (both west and east) in the 7th century. Today, Polabian is extinct (since the late 18th c) but Sorbian survives in two variants in eastern Germany. To claim that there are no Iranian elements in Yiddish is very puzzling.

Surely, Mr. Katz knows that the Babylonian Talmud was edited in its final form by the 6th century, in a territory that was part and parcel of the Iranian Empire. All Talmudic scholars could tell him of the immense Iranian philosophical, religious, legal and linguistic imprint on the Talmud. Many of the Iranianisms in Yiddish passed via Judeo-Aramaic (the language of the Talmud) into written Hebrew and from there into the later Jewish languages of varying stocks. My claim goes further: namely, that Jews were intimately acquainted with Iranian dialects and could not avoid, or did not ever wish to avoid, importing hundreds and maybe even thousands of covert and overt Iranianisms into Yiddish, etc.

The existence of Iranianisms in Yiddish was known also to scholars of Iranian and Slavic. The very distinguished general and Slavic linguist, Roman Jakobson, wrote in the 1960s that Yiddish paskudnik, paskudnjak ‘scoundrel’, though it bore a very close similarity to the coterritorial Slavic languages, was ultimately of Iranian origin (it also appears in the Talmud). Under Ukrainian influence, the Yiddish Iranianism adopted non-Jewish Slavic form–except that -njak in the second variant, though theoretically possible in Ukrainian, is unattested. The Yiddish word has also been discussed in the Iranian linguistic literature. Is this “off-the-wall linguistics”, to quote Katz?

Finally, Mr. Katz seems to think that the term “Ashkenaz” has always been associated with the German lands, the alleged homeland for him of the North European Jews. I would advise Mr. Katz to read the writings of Sa’adya Gaon, the 10th century scholar from Baghdad who understood that Ashkenaz meant “Slavic” (he is not the only writer of that age who thought so). I would advise Mr. Katz to read the Caucasian ethnographic literature of the 1920s where the term Ashkenaz was still being used by local peoples, both Jewish and non-Jewish. I would advise Mr. Katz to have a look at the writings of Wilhelm Bacher, the eminent Iranianist, writing (around the turn of the 20th century) about a 14th-century Uzbek Jew who composed the first extant Hebrew-Persian dictionary and who described himself (a native of Urgench) as “Ashkenazic”.

I am very grieved by Mr. Katz’s ignorance, since I have known him since the 1980s. In those days he wrote several ground-breaking articles on Yiddish which earned him the respect and admiration of all Yiddishists. Unfortunately, he did not live up to his promise.

Das et al. study was published in GBE.

Epigenetics Debacle – Do You Feel Sad for Sid Mukherjee?

Over the weekend, Pulitzer Prize winning writer Sid Mukherjee published an article ‘Same but Different – How epigenetics can blur the line between nature and nurture‘ in the New Yorker magazine.


On October 6, 1942, my mother was born twice in Delhi. Bulu, her identical twin, came first, placid and beautiful. My mother, Tulu, emerged several minutes later, squirming and squalling. The midwife must have known enough about infants to recognize that the beautiful are often the damned: the quiet twin, on the edge of listlessness, was severely undernourished and had to be swaddled in blankets and revived.

The article received widespread criticism for turning black into white, which happens to be a common practice in the epigenetics field. For a detailed summary of the criticisms by various scientists, readers may check the following two blog posts by evolutionary biologist Jerry Coyne. A couple of examples from many are included below.

The New Yorker screws up big time with science: researchers criticize the Mukherjee piece on epigenetics

Researchers criticize the Mukherjee piece on epigenetics: Part 2

Wally Gilbert, Nobel Laureate, biochemist and molecular biologist, Harvard University (retired).

The New Yorker article is so wildly wrong that it defies rational analysis. Too much of the “epigenetic” discussion is wishful thinking seeking Lamarckian effects, and ignoring the role of sequence specific regulatory proteins and genes. (as well as sequence specific RNA molecules).


Sidney Altman, Sterling Professor of Molecular, Cellular, and Developmental Biology and Chemistry at Yale University, Nobel Laureate:

I am not aware that there is such a thing as an epigenetic code. It is unfortunate to inflict this article, without proper scientific review, on the audience of The New Yorker.


Readers may note that the real criticism should be directed at NHGRI, with its funding of unscientific ‘big science’ projects on epigenetics and epigenomic markers, and at unscrupulous scientists like Eric Lander, Manolis Kellis, Eric Green and several others who benefit from this largess. In fact, many criticisms of Sid Mukherjee’s article are similar to what we have been writing over the years regarding various epigenome papers and projects.

In “The Conspiracy of Epigenome?“, we wrote –

We would have had to scratch our heads less, if Lander used ‘conspiracy of epigenome’ instead. Nothing managed to derail this expensive boondoggle over the last four years, including powerful critics of the scientific principle behind it, fraud allegation against the leader, public humiliation of its sister project ENCODE, NIH cost-cutting and protest of the scientists, and so on. What appears even more puzzling is that despite all those prior events, the ‘leaders’ of the epigenome project ended up making the same mistakes as ENCODE. Don’t these clowns learn anything?


In NHGRI’s Epigenetics Investments Starting to Pay Off, we highlighted several other epigenetics-related bogus discoveries press-releases from NIH-funded research.


In Epigenetics in Stem Cells – Is It a Significant Paradigm Shift in Biology?, we wrote –

Nobody gets angrier about references to ‘epigenetics’ than famous biologist Mark Ptashne. The reason is simple. ‘Epigenetics’ is not as harmless as ‘paradigm shift’. In addition to being a bad word, it also introduces bad scientific concepts, as explained by Dr. Ptashne.

Faddish Stuff: Epigenetics and the Inheritance of Acquired Characteristics

Ptashne’s battle against epigenetics is not new. A few years back, he, Oliver Hobert and Eric Davidson rightly questioned the wisdom of backing large epigenome projects.


In ‘Epigenetics’ Explained, in the Context of Descendants of Holocaust and Slavery Victims, we explained –

The word ‘epigenetics’ was introduced by biologist C. H. Waddington in the 1940s to explain how cells maintain their states during development.

For example, the muscle cells, once differentiated, continue to divide into muscle cells and the same is true for other kinds of cells, even though they all start from one universal mother cell and carry the same chromosomes after division. How does a cell get programmed into being a muscle cell or neuronal cell? Waddington speculated that there must be some ‘epigenetic’ effect, because all cells continue to have the same chromosomes and therefore the same set of genes.

Please note that in 1942 the structure of DNA was not yet known, and Waddington was trying to explain things in abstract terms. In the following 60+ years, Waddington’s question was fully answered by geneticists and developmental biologists. It was found that cells remember their states (muscle, neuron, kidney, etc.) through the actions of a set of genes called transcription factors, which act on other genes through binding sites on the chromosomes. The labs of both Professor Ptashne and Professor Davidson did extensive research to elucidate the mechanisms, and Waddington’s ‘epigenetics’ was explained through their work.
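This kind of cellular ‘memory’, a transcription factor maintaining its own expression through positive feedback, can be sketched as a toy bistable switch. The Python model below is a standard textbook illustration, not anyone's specific result, and all parameter values are invented: a factor that activates its own gene settles into either a stable ‘off’ state or a stable ‘on’ state depending on where it starts, which is how a dividing cell can remember that it is, say, a muscle cell.

```python
# Toy bistable switch: a transcription factor X activates its own gene
# with a steep (Hill) response, and is degraded at a constant rate.
#   dx/dt = basal + vmax * x^n / (K^n + x^n) - deg * x
# Two stable steady states exist: low ("off") and high ("on").

def step(x, dt=0.01, basal=0.05, vmax=1.0, K=0.5, n=4, deg=1.0):
    """One Euler integration step of the self-activation dynamics."""
    production = basal + vmax * x**n / (K**n + x**n)
    return x + dt * (production - deg * x)

def settle(x0, steps=20000):
    """Integrate long enough to reach a steady state."""
    x = x0
    for _ in range(steps):
        x = step(x)
    return x

low = settle(0.1)   # a cell starting with little TF stays "off"
high = settle(0.9)  # a cell starting with much TF locks itself "on"
print(f"off state: {low:.3f}, on state: {high:.3f}")
```

Starting below the threshold the factor decays back to near-basal levels, while starting above it the feedback loop sustains high expression indefinitely: the state persists without any change to the DNA sequence, which is exactly the point Ptashne and Davidson made.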


Eric Davidson’s book

One major criticism of Mukherjee’s article concerns his claim that Yamanaka’s stem cell work was proof of ‘epigenetic regulation’, when it was the exact opposite. Readers interested in the real science behind how genomes and Yamanaka’s work connect should start with Eric Davidson’s book, previously discussed in our blog –

A Must Read Book, If You Like to Understand Genomes


Lastly, maybe it is time for Sid Mukherjee to pick up a real cause that would save other writers like him from future embarrassment.

Let’s Discuss – Is it Time to Shut Down NHGRI?
