
How some Indian hospitals are cutting cancer drug costs

People waiting at Cachar Cancer Centre in Assam, India

Scores of patients quietly fill a modest tin shed which serves as a waiting area at a cancer hospital in Silchar, in north-eastern India.

Over the last few months, Cachar Cancer Centre in the state of Assam has seen an unusually high number of patients from nearby towns and villages.

The reason: a quiet revolution that is making cancer drugs more affordable.

The hospital is part of the National Cancer Grid, a group of treatment centres that have clubbed together to bulk buy drugs and bring down costs by more than 85%.

It is a modest start but, literally, a lifesaver for some of the country’s poorest people.

Expensive, protracted treatments often put families under immense financial strain or are simply out of reach.

For example, breast cancer treatments can extend for over 10 cycles and cost more than $6,000 (£4,719). In a country where the average monthly salary is less than $700, that is beyond many household budgets.

Baby Nandi has been receiving chemotherapy to treat breast cancer

Baby Nandi, 58, is waiting for her next chemotherapy session at the Cachar hospital clinic. Previously, she had to travel 2,000km (1,242 miles) for breast cancer treatment. The drugs alone cost $650 for one treatment cycle. She needed six cycles. Along with the travel and accommodation costs, her family’s finances were pushed to the brink.

Thanks to the new initiative, those drugs are now available in her home city, Silchar, at a third of the cost.

Baby’s husband Narayan Nandi said: “We don’t have so much money at a go. I had to sell land and borrow from my relatives to take her to Chennai. At least now we can afford her full treatment and be home.”

Nearly two million cancer cases are reported in India each year, but consultancy firm EY says the actual figure could be up to three times higher.

Most people in India have to pay for healthcare themselves. Even for those with insurance or on government schemes, cancer care costs are often not fully covered.

Amal Chandra, the owner of a small shop in rural Assam, knows the problem well. Last year his wife’s government health card, which covered $1,800 of health expenses, expired midway through her breast cancer treatment. “I had to borrow $250 to pay for her remaining chemotherapy injections,” he told the BBC.

Amal and his wife are now back at the hospital because her cancer has returned, but at least the whole cost of her treatment is now covered after drug prices were brought down.

Oncologist Dr Ravi Kannan leads the Cachar Cancer Hospital’s operations

A major issue is that most of India’s cancer patients live in towns and rural areas, while the bulk of healthcare resources are in larger cities. This means that patients, like Mrs Nandi, and their families face the added burden of having to travel long distances to access treatment.

Healthcare experts say that getting cancer drugs to these parts of the country is one of the healthcare system’s biggest problems. Cachar Cancer Hospital, the only facility of its kind in India’s North-eastern hills, is trying to meet that challenge.

It treats 5,000 new patients a year and manages the ongoing treatment of another 25,000 people, who are mainly low-paid workers unable to afford the cost of cancer treatment and travel.

The intense pressure this puts on the not-for-profit organisation’s funding means it faces a budget deficit of more than $20,000 a month.

Oncologist Dr Ravi Kannan, who leads the hospital’s operations, told the BBC that the initiative to cut cancer drug prices has helped him to buy quality medicines and treat more patients for free.

It has also helped hospitals in smaller towns avoid another serious problem – running out of cancer drugs. Previously, drug supplies outside large urban centres were erratic due to the low numbers of patients and limited funds.

“Now smaller hospitals don’t have to get into the negotiation table at all. The price is already decided and comes with a commitment to supply to all hospitals at par,” Dr Kannan said.

A woman collects a prescription

The initiative to bulk-buy drugs is led by the country’s largest cancer centre, Tata Memorial Hospital (TMH) in Mumbai. The initial list had 40 common off-patent generic drugs, covering 80% of their pharmacy costs, saving the group $170m.

The success of the scheme has attracted interest from hospitals and state governments across the country.

The next round will expand to over 100 drugs, while broader cancer care purchases like supplies, diagnostics and equipment are also being considered. However, more expensive patented treatments are currently not part of the plan.

“I think what pharmaceutical companies need to understand is in a market like India, unless you bring costs down, you’re not going to get the volumes and it’s a chicken and the egg phenomenon,” according to Dr C S Pramesh, Director of TMH and the Convenor of National Cancer Grid.

Dr Pramesh also says that with around 70% of global cancer deaths projected to be in lower and middle income countries, like India, initiatives similar to the National Cancer Grid could be key to helping patients around the world.



Aditya-L1: India’s Sun mission set to reach destination in hours

Aditya-L1 lifted off from the launch pad at Sriharikota on Saturday morning

India’s first solar observation mission is set to reach its final destination in a few hours.

On Saturday, the space agency Isro will attempt to place Aditya-L1 in a spot in space from where it will be able to continuously watch the Sun.

The spacecraft has been travelling towards the Sun for four months since lift-off on 2 September.

It was launched just days after India made history by becoming the first country to land a spacecraft near the Moon’s south pole.

India’s first space-based mission to study the solar system’s biggest object is named after Surya – the Hindu god of the Sun, who is also known as Aditya. And L1 stands for Lagrange point 1 – the exact place between the Sun and Earth where the spacecraft is heading.

According to the European Space Agency, a Lagrange point is a spot where the gravitational forces of two large objects – such as the Sun and the Earth – cancel each other out, allowing a spacecraft to “hover”.

L1 is located 1.5 million km (932,000 miles) from the Earth, which is 1% of the Earth-Sun distance. Isro recently said that the spacecraft had already covered most of the distance to its destination.
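As a back-of-the-envelope check (not part of Isro's published figures), the quoted L1 distance follows from the standard Hill-sphere approximation r ≈ R·(m/3M)^(1/3), where R is the Earth-Sun distance and m/M the Earth/Sun mass ratio:

```python
# Approximate distance of the Sun-Earth L1 point from Earth using the
# Hill-sphere formula r ≈ R * (m / (3 * M)) ** (1/3).
R = 1.496e8          # mean Earth-Sun distance in km (1 au)
m_over_M = 3.003e-6  # Earth/Sun mass ratio

r = R * (m_over_M / 3) ** (1 / 3)
print(round(r / 1e6, 1))      # ≈ 1.5 (million km from Earth)
print(round(100 * r / R, 1))  # ≈ 1.0 (% of the Earth-Sun distance)
```

This reproduces the figures in the text: about 1.5 million km, or roughly 1% of the way to the Sun.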

An Isro official told the BBC that “a final maneuver” will be performed on Saturday at around 16:00 India time (10:30 GMT) to place Aditya in L1’s orbit.

Isro chief S Somanath has said they will trap the craft in orbit and will occasionally need to do more maneuvers to keep it in place.

Once Aditya-L1 reaches this “parking spot” it will be able to orbit the Sun at the same rate as the Earth. From this vantage point it will be able to watch the Sun constantly, even during eclipses and occultations, and carry out scientific studies.

Aditya-L1's trajectory

The orbiter carries seven scientific instruments which will observe and study the solar corona (the outermost layer); the photosphere (the Sun’s surface or the part we see from the Earth) and the chromosphere (a thin layer of plasma that lies between the photosphere and the corona).

After lift-off on 2 September, the spacecraft went four times around the Earth before escaping the sphere of Earth’s influence on 30 September. In early October, Isro said it had made a slight correction to the spacecraft’s trajectory to ensure it was on its intended path towards the final destination.

The agency says some of the instruments on board have already started work, gathering data and taking images.

Just days after lift-off, Isro shared the first images sent by the mission – one showed the Earth and the Moon in one frame and the second was a “selfie” that showed two of its scientific instruments.

And last month the agency released the first-ever full-disk images of the Sun in wavelengths ranging from 200 to 400 nanometers, saying they provided “insights into the intricate details of the Sun’s photosphere and chromosphere”.

Scientists say the mission will help them understand solar activity, such as the solar wind and solar flares, and their effect on Earth and near-space weather in real time.

The radiation, heat and flow of particles and magnetic fields of the Sun constantly influence the Earth’s weather. They also impact the space weather where nearly 7,800 satellites, including more than 50 from India, are stationed.


Scientists say Aditya can help better understand, and even give a forewarning of, solar winds or eruptions a couple of days ahead, which will help India and other countries move satellites out of harm’s way.

Isro has not given details of the mission’s cost, but reports in the Indian press have put it at 3.78bn rupees ($46m; £36m).

If Saturday’s maneuver is successful, India will join a select group of countries that are already studying the Sun.

The US space agency Nasa has been watching the Sun since the 1960s; Japan launched its first solar mission in 1981 and the European Space Agency (ESA) has been observing the Sun since the 1990s.

In February 2020, Nasa and ESA jointly launched a Solar Orbiter that is studying the Sun from close quarters and gathering data that, scientists say, will help understand what drives its dynamic behavior.

And in 2021, Nasa’s newest spacecraft Parker Solar Probe made history by becoming the first to fly through the corona, the outer atmosphere of the Sun.



Amazing images from James Webb telescope, two years after launch

The James Webb Space Telescope (JWST) was launched to orbit just two years ago, but already it’s starting to redefine our view of the early Universe.

CASSIOPEIA A The expanding shells of debris from Cas A, an exploded star, or supernova. The main ring is about 15 light-years across.

JUPITER The largest planet in the Solar System, Jupiter, viewed in infrared light. The brightest features are at the highest altitudes – the tops of convective storm clouds.

Without really stretching its capabilities, the infrared observatory has been peering deep into the cosmos to show us galaxies of stars as they were up to 13.5 billion years ago.

A lot of them are brighter, more massive, and more mature than many scientists thought possible so soon after the Big Bang, which occurred 13.8 billion years ago.

“We certainly thought we’d be seeing fuzzy blobs of stars. But we’re also seeing fully formed galaxies with perfect spiral arms,” said Prof Gillian Wright.

“Theorists are already working on how you get those mature structures so early in the Universe. In that sense, Webb is really changing scientific thinking,” the director of the UK Astronomy Technology Centre told BBC News.

M51 The Whirlpool Galaxy can be seen in the night sky with just binoculars. Here, the most powerful space telescope ever launched uses its incredible capabilities to study the intricate spiral arms.

CHAMELEON I The Chameleon I molecular cloud is about 630 light-years from Earth. It’s here, at temperatures down to about -260C, that Webb has detected types of ice grains not previously observed.

SAGITTARIUS C Webb looks to the centre of our galaxy, close to where a supermassive black hole exists. This region of space contains tens of thousands of stars, including many that are birthing inside the bright pink feature at centre-left. The cyan colour highlights excited hydrogen gas.

And it’s not just the efficiency with which these early galaxies were able to form their stars that’s been a surprise, the size of their central black holes has been a marvel, too.

There’s a monster at the core of our Milky Way that’s four million times the mass of our Sun. One theory suggests such behemoths are made over time by accreting lots of smaller holes produced as remnants from exploded stars, or supernovae.

“But the preliminary evidence from JWST is that some of these early giants may have completely bypassed the star stage,” said Dr Adam Carnall from the University of Edinburgh.

“There is a scenario where huge clouds of gas in the early Universe could have collapsed violently and just kept going, straight to being black holes.”

NGC 3256 This is what you get when two galaxies crash into each other, an event estimated to have occurred about 500 million years ago. The collision drives the formation of new stars that then illuminate the gas and dust around them.

CRAB NEBULA The famous supernova remnant first recorded by Chinese astronomers in 1054. It’s located some 6,500 light-years from Earth in the constellation Taurus.

When James Webb was launched at Christmas 2021, it was thought it might have 10 years of operations ahead of it. The telescope needs its own fuel to maintain station 1.5 million km from Earth. But the flight to orbit on an Ariane rocket was so accurate, it’s estimated now to have fuel reserves for 20 years of life, if not longer.

This means, rather than racing through their observations, astronomers can afford to take a more strategic approach to the telescope’s work.

“We thought we’d be skimming cream; we no longer need to do that,” said Dr Eric Smith, Webb’s programme scientist at the US space agency Nasa.

One activity that’s sure to accelerate is the practice of making “deep fields”. These are long stares at particular patches of sky that will allow the telescope to trace the light from the faintest and most distant galaxies. It’s how Webb is likely to spot the very first galaxies and possibly even some of the very first stars to shine in the Universe.

SATURN The famed ringed planet appears quite dark to Webb in this image because methane in the planet’s atmosphere absorbs infrared light strongly. Three of Saturn’s moons are seen to the left.

HH212 A baby star, no more than 50,000 years old, launches energetic jets from both poles that light up the molecular hydrogen in pink. The entire structure is 1.6 light-years across.

The Hubble telescope famously expended many days just looking at a single corner of the cosmos. “I don’t think we’ll need the hundreds of hours of exposure that Hubble used, but I do think we’ll need multiple deep fields,” said Dr Emma Curtis-Lake from the University of Hertfordshire.

“We’ve already had some quite long exposures with JWST and we’re seeing quite a lot of variation. So, we can’t put everything into one teeny-tiny area because there’s no guarantee we’ll find something super-exciting.”

JADES The JWST Advanced Deep Extragalactic Survey, otherwise known as Jades, has identified the earliest confirmed galaxy, called JADES-GS-z13-0, which is observed just 325 million years after the Big Bang.

STAR CLUSTER IC 348 Wispy filaments of gas and dust stream between a cluster of bright stars. Webb found the lowest mass brown dwarf, or “failed star”, in this image – an object about three to four times the mass of our Jupiter.

The Space Telescope Science Institute’s Dr Massimo Stiavelli dreams of spotting a star that is primordial – that has the signature of the original chemistry that emerged from the Big Bang; that hasn’t been polluted with elements that were forged only later in cosmic history.

“We’ll need to see them as supernovae, when they explode,” the head of the Webb mission office said.

“To achieve this, we need to start looking at the same patches year after year to catch them before and just after they go off. They’ll be extremely rare and we’ll need to be very lucky.”

EARENDEL The most distant single star observed to date is called Earendel. James Webb confirmed its light has taken 12.9 billion years to reach us. Its light has been boosted by the gravity of foreground galaxies.

ORION NEBULA The famous star forming region can just about be seen by the naked eye as a smudge on the sky. It would take a spaceship travelling at light-speed a little over four years to traverse this Webb scene.

Marvel at the extraordinary collection of James Webb pictures on this page – from the most distant reaches of the Universe to the nearby familiar objects in our own Solar System.

It’s amazing to think that imaging isn’t actually the bulk of the telescope’s workload.

More than 70% of its time is spent doing spectroscopy. That’s sampling the light from objects and slicing it up into its “rainbow” colours. It’s how you retrieve key information about the chemistry, temperature, density and velocity of the targets under study.

“You could think of Webb as a giant spectrograph that takes the occasional nice picture,” joked Dr Smith.

RHO OPHIUCHI This cloud complex is the nearest star forming region to Earth, being just 400 light-years away. The star lighting up the white cavity is just a few million years old.


Ancient and religious calendar systems

The Near East and the Middle East

The lunisolar calendar, in which months are lunar but years are solar—that is, are brought into line with the course of the Sun—was used in the early civilizations of the whole Middle East, except Egypt, and in Greece. The formula was probably invented in Mesopotamia in the 3rd millennium BCE. Study of cuneiform tablets found in this region facilitates tracing the development of time reckoning back to the 27th century BCE, near the invention of writing. The evidence shows that the calendar is a contrivance for dividing the flow of time into units that suit society’s current needs. Though calendar makers put to use time signs offered by nature—the Moon’s phases, for example—they rearranged reality to make it fit society’s constructions.

Babylonian calendars

In Mesopotamia the solar year was divided into two seasons, the “summer,” which included the barley harvest in the second half of May or in the beginning of June, and the “winter,” which roughly corresponded to today’s fall–winter. Three seasons (Assyria) and four seasons (Anatolia) were counted in northerly countries, but in Mesopotamia the bipartition of the year seemed natural. As late as about 1800 BCE the prognoses for the welfare of the city of Mari, on the middle Euphrates, were taken for six months.

The months began at the first visibility of the New Moon, and in the 8th century BCE court astronomers still reported this important observation to the Assyrian kings. The names of the months differed from city to city, and within the same Sumerian city of Babylonia a month could have several names, derived from festivals, from tasks (e.g., sheepshearing) usually performed in the given month, and so on, according to local needs. On the other hand, as early as the 27th century BCE, the Sumerians had used artificial time units in referring to the tenure of some high official—e.g., on N-day of the turn of office of PN, governor. The Sumerian administration also needed a time unit comprising the whole agricultural cycle; for example, from the delivery of new barley and the settling of pertinent accounts to the next crop. This financial year began about two months after barley cutting. For other purposes, a year began before or with the harvest. This fluctuating and discontinuous year was not precise enough for the meticulous accounting of Sumerian scribes, who by 2400 BCE already used the schematic year of 30 × 12 = 360 days.

At about the same time, the idea of a royal year took precise shape, beginning probably at the time of barley harvest, when the king celebrated the new (agricultural) year by offering first fruits to gods in expectation of their blessings for the year. When, in the course of this year, some royal exploit (conquest, temple building, and so on) demonstrated that the fates had been fixed favourably by the celestial powers, the year was named accordingly; for example, as the year in which “the temple of Ningirsu was built.” Until the naming, a year was described as that “following the year named (after such and such event).” The use of the date formulas was supplanted in Babylonia by the counting of regnal years in the 17th century BCE.

The use of lunar reckoning began to prevail in the 21st century BCE. The lunar year probably owed its success to economic progress. A barley loan could be measured out to the lender at the next year’s threshing floor. The wider use of silver as the standard of value demanded more flexible payment terms. A man hiring a servant in the lunar month of Kislimu for a year knew that the engagement would end at the return of the same month, without counting days or periods of office between two dates. At the city of Mari about 1800 BCE, the allocations were already reckoned on the basis of 29- and 30-day lunar months. In the 18th century BCE the Babylonian empire standardized the year by adopting the lunar calendar of the Sumerian sacred city of Nippur. The power and the cultural prestige of Babylon assured the success of the lunar year, which began on Nisanu 1, in the spring. When in the 17th century BCE the dating by regnal years became usual, the period between the accession day and the next Nisanu 1 was described as “the beginning of the kingship of PN,” and the regnal years were counted from this Nisanu 1.

It was necessary for the lunar year of about 354 days to be brought into line with the solar (agricultural) year of approximately 365 days. This was accomplished by the use of an intercalated month. Thus, in the 21st century BCE a special name for the intercalated month iti dirig appears in the sources. The intercalation was operated haphazardly, according to real or imagined needs, and each Sumerian city inserted months at will—e.g., 11 months in 18 years or two months in the same year. Later the empires centralized the intercalation, and as late as 541 BCE it was proclaimed by royal fiat. Improvements in astronomical knowledge eventually made possible the regularization of intercalation, and, under the Persian kings (c. 380 BCE), Babylonian calendar calculators succeeded in computing an almost perfect equivalence in a lunisolar cycle of 19 years and 235 months with intercalations in the years 3, 6, 8, 11, 14, 17, and 19 of the cycle. New Year’s Day (Nisanu 1) now oscillated around the spring equinox within a period of 27 days.
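The near-equivalence the Babylonian calculators arrived at is easy to verify with modern mean values for the tropical year and synodic month; this is a quick check, not a reconstruction of their method:

```python
# Check the near-equivalence of 19 solar years and 235 lunar months,
# the lunisolar cycle adopted under the Persian kings.
SOLAR_YEAR = 365.2422   # mean tropical year, in days
LUNAR_MONTH = 29.5306   # mean synodic month, in days

solar_days = 19 * SOLAR_YEAR    # ≈ 6939.60 days
lunar_days = 235 * LUNAR_MONTH  # ≈ 6939.69 days

# 235 months over 19 twelve-month years leaves exactly 7 intercalary
# months, matching the 7 intercalations listed in the text.
print(235 - 19 * 12)                       # 7
print(round(lunar_days - solar_days, 2))   # ≈ 0.09 days over 19 years
```

The two counts of days agree to within about two hours over the whole 19-year cycle, which is why the scheme held up so well.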

The Babylonian month names were Nisanu, Ayaru, Simanu, Duʾuzu, Abu, Ululu, Tashritu, Arakhsamna, Kislimu, Tebetu, Shabatu, Adaru. The month Adaru II was intercalated six times within the 19-year cycle but never in the year that was 17th of the cycle, when Ululu II was inserted. Thus, the Babylonian calendar until the end preserved a vestige of the original bipartition of the natural year into two seasons, just as the Babylonian months to the end remained truly lunar and began when the New Moon was first visible in the evening. The day began at sunset. Sundials and water clocks (clepsydra) served to count hours.

The influence of the Babylonian calendar was seen in many continued customs and usages of its neighbour and vassal states long after the Babylonian empire had been succeeded by others. In particular, the Jewish calendar in use at relatively late dates employed similar systems of intercalation of months, month names, and other details (see below The Jewish calendar). The Jewish adoption of Babylonian calendar customs dates from the period of the Babylonian Exile in the 6th century BCE.

Other calendars used in the ancient Near East

The Assyrians and the Hittites

Of the calendars of other peoples of the ancient Near East, very little is known. Thus, though the names of all or of some months are known, their order is not. The months were probably everywhere lunar, but evidence for intercalation is often lacking; for instance, in Assyria. For accounting, the Assyrians also used a kind of week, of five days, as it seems, identified by the name of an eponymous official. Thus, a loan could be made and interest calculated for a number of weeks in advance and independently of the vagaries of the civil year. In the city of Ashur, the years bore the name of the official elected for the year; his eponym was known as the limmu. As late as about 1070 BCE, his installation date was not fixed in the calendar. From about 1100 BCE, however, Babylonian month names began to supplant Assyrian names, and, when Assyria became a world power, it used the Babylonian lunisolar calendar.

The calendar of the Hittite empire is known even less well. As in Babylonia, the first Hittite month was that of first fruits, and, on its beginning, the gods determined the fates.


The Persians

At about the time of the conquest of Babylonia in 539 BCE, Persian kings made the Babylonian cyclic calendar standard throughout the Persian empire, from the Indus to the Nile. Aramaic documents from Persian Egypt, for instance, bear Babylonian dates besides the Egyptian. Similarly, the royal years were reckoned in Babylonian style, from Nisanu 1. It is probable, however, that at the court itself the counting of regnal years began with the accession day. The Seleucids and, afterward, the Parthian rulers of Iran maintained the Babylonian calendar. The fiscal administration in northern Iran, from the 1st century BCE, at least, used Zoroastrian month and day names in documents in Pahlavi (the Iranian language of Sāsānian Persia). The origin and history of the Zoroastrian calendar year of 12 months of 30 days, plus five days (that is, 365 days), remain unknown. It became official under the Sāsānian dynasty, from about 226 CE until the Arab conquest in 621. The Arabs introduced the Muslim lunar year, but the Persians continued to use the Sāsānian solar year, which in 1079 was made equal to the Julian year by the introduction of the leap year.

The Egyptian calendar

The ancient Egyptians originally employed a calendar based upon the Moon, and, like many peoples throughout the world, they regulated their lunar calendar by means of the guidance of a sidereal calendar. They used the seasonal appearance of the star Sirius (Sothis); this corresponded closely to the true solar year, being only 12 minutes shorter. Certain difficulties arose, however, because of the inherent incompatibility of lunar and solar years. To solve this problem the Egyptians invented a schematized civil year of 365 days divided into three seasons, each of which consisted of four months of 30 days each. To complete the year, five intercalary days were added at its end, so that the 12 months were equal to 360 days plus five extra days. This civil calendar was derived from the lunar calendar (using months) and the agricultural, or Nile, fluctuations (using seasons); it was, however, no longer directly connected to either and thus was not controlled by them. The civil calendar served government and administration, while the lunar calendar continued to regulate religious affairs and everyday life.

In time, the discrepancy between the civil calendar and the older lunar structure became obvious. Because the lunar calendar was controlled by the rising of Sirius, its months would correspond to the same season each year, while the civil calendar would move through the seasons because the civil year was about one-fourth day shorter than the solar year. Hence, every four years it would fall behind the solar year by one day, and after 1,460 years it would again agree with the lunisolar calendar. Such a period of time is called a Sothic cycle.
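The arithmetic behind the civil year and the Sothic cycle can be sketched in a few lines:

```python
# The schematic Egyptian civil year: 3 seasons of 4 months of 30 days,
# plus 5 epagomenal days added at the end.
civil_year = 3 * 4 * 30 + 5
print(civil_year)  # 365

# The civil year runs about a quarter-day short of the solar year, so it
# slips one full day every 4 years; cycling through all 365 days of the
# year takes 365 * 4 years -- the Sothic cycle.
sothic_cycle = civil_year * 4
print(sothic_cycle)  # 1460
```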

Because of the discrepancy between these two calendars, the Egyptians established a second lunar calendar based upon the civil year and not, as the older one had been, upon the sighting of Sirius. It was schematic and artificial, and its purpose was to determine religious celebrations and duties. In order to keep it in general agreement with the civil year, a month was intercalated every time the first day of the lunar year came before the first day of the civil year; later a 25-year cycle of intercalation was introduced. The original lunar calendar, however, was not abandoned but was retained primarily for agriculture because of its agreement with the seasons. Thus, the ancient Egyptians operated with three calendars, each for a different purpose.

The only unit of time that was larger than a year was the reign of a king. The usual custom of dating by reign was “year 1, 2, 3,…of King So-and-So,” and with each new king the counting reverted to year 1. King lists recorded consecutive rulers and the total years of their respective reigns.

The civil year was divided into three seasons, commonly translated: Inundation, when the Nile overflowed the agricultural land; Going Forth, the time of planting when the Nile returned to its bed; and Deficiency, the time of low water and harvest.

The months of the civil calendar were numbered according to their respective seasons and were not listed by any particular name—e.g., third month of Inundation—but for religious purposes the months had names. How early these names were employed in the later lunar calendar is obscure.

The days in the civil calendar were also indicated by number and listed according to their respective months. Thus a full civil date would be: “Regnal year 1, fourth month of Inundation, day 5, under the majesty of King So-and-So.” In the lunar calendar, however, each day had a specific name, and from some of these names it can be seen that the four quarters or chief phases of the Moon were recognized, although the Egyptians did not use these quarters to divide the month into smaller segments, such as weeks. Unlike most people who used a lunar calendar, the Egyptians began their day with sunrise instead of sunset because they began their month, and consequently their day, by the disappearance of the old Moon just before dawn.

As was customary in early civilizations, the hours were unequal, daylight being divided into 12 parts, and the night likewise; the duration of these parts varied with the seasons. Both water clocks and sundials were constructed with notations to indicate the hours for the different months and seasons of the year. The standard hour of constant length was never employed in ancient Egypt.



What is generative AI, and why is it suddenly everywhere?

Between ChatGPT and Stable Diffusion, AI suddenly feels mainstream.

Welcome to the age of generative AI, when it’s now possible for anyone to create new, original illustrations and text by simply sending a few instructions to a computer program. Several generative AI models, including ChatGPT and an image generator called Stable Diffusion, can now be accessed online for free or for a low-cost subscription, which means people across the world can do everything from assemble a children’s book to produce computer code in just a few clicks. This tech is impressive, and it can get pretty close to writing and illustrating how a human might. Don’t believe me? Here’s a Magic School Bus short story ChatGPT wrote about Ms. Frizzle’s class trip to the Fyre Festival. And below is an illustration I asked Stable Diffusion to create about a family celebrating Hanukkah on the moon.

Generative AI’s results aren’t always perfect, and we’re certainly not dealing with an all-powerful, super AI — at least for now. Sometimes its creations are flawed, inappropriate, or don’t totally make sense. If you were going to celebrate Hanukkah on the moon, after all, you probably wouldn’t depict giant Christmas ornaments strewn across the lunar surface. And you might find the original Magic School Bus stories more entertaining than my AI-generated one.

Still, even in its current form and with its current limitations, generative AI could automate some tasks humans do daily — like writing form emails or drafting simple legal contracts — and possibly make some kinds of jobs obsolete. This technology presents plenty of opportunities, but plenty of complex new challenges, too. Writing emails may suddenly have gotten a lot easier, for example, but catching cheating students has definitely gotten a lot harder.

It’s only the beginning of this tech, so it can be hard to make sense of what exactly it is capable of or how it could impact our lives. So we tried to answer a few of the biggest questions surrounding generative AI right now.

Wait, how does this AI work?

Very simply, a generative AI system is designed to produce something new based on its previous experience. Usually, this technology is built with a technique called machine learning, which involves teaching an artificial intelligence to perform tasks by exposing it to lots and lots of data, which it “trains” on and eventually learns to mimic. ChatGPT, for example, was trained on an enormous quantity of text available on the internet, along with scripts of dialogue, so that it could imitate human conversations. Stable Diffusion, an image generator created by the startup Stability AI, produces an image based on text instructions; it was built by feeding the AI images and their associated captions collected from the web, which taught it what to “illustrate” in response to the verbal commands it receives.

While the particular approaches used to build generative AI models can differ, this technology is ultimately trying to reproduce human behavior, creating new content based on the content that humans have already created. In some ways, it’s like the smart compose features you see on your iPhone when you’re texting or your Gmail account when you’re typing out an email. “It learns to detect patterns in this content, which in turn allows it to generate similar but distinct content,” explains Vincent Conitzer, a computer science professor at Carnegie Mellon.
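To make the idea of “learning patterns in content in order to generate similar but distinct content” concrete, here is a deliberately tiny sketch: a Markov-chain text generator. This is not how ChatGPT works — modern systems use large neural networks, not word-lookup tables — but the core loop is the same in spirit: count which words tend to follow which, then sample plausible continuations. The corpus and all function names here are made up for illustration.

```python
import random
from collections import defaultdict

def train(text, order=2):
    """Map each sequence of `order` words to the words observed after it."""
    model = defaultdict(list)
    words = text.split()
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        model[key].append(words[i + order])
    return model

def generate(model, order=2, length=10, seed=None):
    """Walk the model, sampling one plausible next word at each step."""
    state = seed or random.choice(list(model))
    out = list(state)
    for _ in range(length):
        choices = model.get(tuple(out[-order:]))
        if not choices:  # no recorded continuation; stop
            break
        out.append(random.choice(choices))
    return " ".join(out)

# A toy "training set" — real models train on billions of words.
corpus = ("the cat sat on the mat the cat saw the dog "
          "the dog sat on the rug the cat sat on the rug")
model = train(corpus)
print(generate(model, seed=("the", "cat")))
```

The output is made entirely of word pairs seen in the training text, yet the exact sentence may never appear there — which is the miniature version of “similar but distinct.” It also shows why training data matters so much: the model can only ever recombine what it was shown.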

This method of building AI can be extremely powerful, but it also has real flaws. In one test, for example, an AI model called Galactica that Meta built to help write scientific papers suggested that the Soviet Union was the first country to put a bear in space, among several other errors and falsehoods. (The company pulled the system offline in November, after just a few days.) Lensa AI’s Magic Avatar feature, the AI portrait generator, sometimes illustrates people with additional limbs, and it also has a concerning tendency to depict women without any clothing.

It’s easy to find other biases and stereotypes built into this technology, too. When the Intercept asked ChatGPT to come up with an airline passenger screening system, the AI suggested higher risk scores for people from — or who had visited — Syria and Afghanistan, among other countries. Stable Diffusion also reproduces racial and gender stereotypes, like only depicting firefighters as white men. These are not particularly new problems with this kind of AI, as Abeba Birhane and Deborah Raji recently wrote in Wired. “People get hurt from the very practical ways such models fall short in deployment, and these failures are the result of their builders’ choices — decisions we must hold them accountable for,” they wrote.

Who is creating this AI, and why?

Generative AI isn’t free out of the goodness of tech companies’ hearts. These systems are free because the companies building them want to improve their models and technology, and people playing around with trial versions of the software give these companies, in turn, even more training data. Operating the computing systems to build artificial intelligence models can be extremely expensive, and while companies aren’t always upfront about their own expenses, costs can stretch into the tens of millions of dollars. AI developers want to eventually sell and license their technology for a profit.

There are already hints about what this new generative AI industry could look like. OpenAI, which developed the DALL-E and ChatGPT systems, operates under a capped-profit model and expects to bring in $1 billion in revenue by 2024, primarily through selling access to its tech (outside developers can already pay to use some of OpenAI’s tech in their apps). Microsoft has already started to use the system to assist with some aspects of computer programming in its code development app. Stability AI, the Stable Diffusion creator, wants to build specialized versions of the technology that it could sell to individual companies. The startup raised more than $100 million this past October.

Some think ChatGPT could ultimately replace Google’s search engine, which powers one of the biggest digital ad businesses in the world. ChatGPT is also pretty good at some basic aspects of coding, and technologies like it could eventually lower the overall costs of developing software. At the same time, OpenAI already has a pricing program available for DALL-E, and it’s easy to imagine how the system could be turned into a way of generating advertisements, visuals, and other graphics at a relatively low cost.

Is this the end of homework?

AI tools are already being used for one obvious thing: schoolwork, especially essays and online exams. These AI-produced assignments wouldn’t necessarily earn an A, but teachers seem to agree that ChatGPT can create at least B-worthy work. While tools for detecting whether a piece of text is AI-generated are emerging, popular plagiarism detection software like Turnitin won’t catch this kind of cheating.

The arrival of this tech has driven some to declare the end of high school English, and even homework itself. While those predictions are hyperbolic, it’s certainly possible that homework will need to adapt. Some teachers may reverse course on the use of technology in the classroom and return to in-person, paper-based exams. Other instructors might turn to lockdown browsers, which would prevent people from visiting websites during a computer-based test. The use of AI itself may become part of the assignment, which is an idea some teachers are already exploring.

“The sorts of professionals our students want to be when they graduate already use these tools,” Phillip Dawson, the associate director of the Centre for Research in Assessment and Digital Learning, told Recode in December. “We can’t ban them, nor should we.”

Is AI going to take my job?

It’s hard to predict which jobs will or won’t be eradicated by generative AI. Greg Brockman, one of OpenAI’s co-founders, said in a December tweet that ChatGPT is “not yet ready to be relied on for anything important.” Still, this technology can already do all sorts of things that companies currently need humans to do. Even if this tech doesn’t take over your entire job, it might very well change it.

Take journalism: ChatGPT can already write a pretty compelling blog post. No, the post might not be particularly accurate — which is why there’s concern that ChatGPT could be quickly exploited to produce fake news — but it can certainly get the ball rolling, coming up with basic ideas for an article and even drafting letters to sources. The same bot can also earn a good score on a college-level coding exam, and it’s not bad at writing about legal concepts, either. A photo editor at New York magazine pointed out that while DALL-E doesn’t quite understand how to make illustrations dealing with complex political or abstract concepts, it can be helpful when given repeated prodding and explicit instructions.

While there are limits on what ChatGPT could be used for, even automating just a few tasks in someone’s workflow, like writing basic code or copy editing, could radically change a person’s workday and reduce the total number of workers needed in a given field. As an example, Conitzer, the computer science professor, pointed to the impact of services like Google Flights on travel agencies.

“Online travel sites, even today, do not offer the full services of a human travel agent, which is why human travel agents are still around, in larger numbers than many people expect,” he told Recode. “That said, clearly their numbers have gone down significantly because the alternative process of just booking flights and a place to stay yourself online — a process that didn’t exist some decades ago — is a fine alternative in many cases.”

Should I be worried?

Generative AI is going mainstream rapidly, and companies aim to sell this technology as soon as possible. At the same time, the regulators who might try to rein in this tech, if they find a compelling reason, are still learning how it works.

The stakes are high. Like other breakthrough technologies — things like the computer and the smartphone, but also earlier inventions, like the air conditioner and the car — generative AI could change much of how our world operates. And like other revolutionary tech, the arrival of this kind of AI will create complicated trade-offs. Air conditioners, for example, have made some of the hottest days of the year more bearable, but they’re also exacerbating the world’s climate change problem. Cars made it possible to travel extremely long distances without the need for a train or horse-drawn carriage, but motor vehicle crashes now kill tens of thousands of people, at least in the United States, every year.

In the same way, decisions we make about AI now could have ripple effects. Legal cases about who deserves the profit and credit — but also the liability — for work created by AI are being decided now and could shape who benefits from this technology for years to come. Schools and teachers will determine whether to incorporate AI into their curriculums or discard it as a form of cheating, inevitably influencing how kids will relate to these technologies in their professional lives. The rapid expansion of AI image generators could center Eurocentric art forms at the expense of other artistic traditions, which are already underrepresented by the technology.

If and when this AI goes fully mainstream, it could be incredibly difficult to unravel. In this way, the biggest threat of this technology may be that it stands to change the world before we’ve had a chance to truly understand it.