The Silicon Valley Bank failure and its effects on the startup industry

Silicon Valley Bank (SVB) was shut down in March 2023 by the California Department of Financial Protection and Innovation. Before that, it was the 16th largest bank in the US and had been a crucial part of the startup ecosystem in Silicon Valley, providing critical financial services to growing businesses, particularly in the tech industry. Those services ranged from traditional banking products like checking and savings accounts to more specialized offerings like venture debt and equity financing. SVB was the largest bank to fail since Washington Mutual closed during the financial crisis of 2008.

Between 2019 and 2022, the bank grew rapidly and accumulated a large volume of deposits and assets. Only a small fraction of these deposits was held in cash; the rest was used to buy Treasury bonds and other long-term debt. These investments tend to offer lower returns but also carry lower risk.

As the Federal Reserve raised interest rates, these initially low-risk investments became riskier. Newly issued bonds now paid higher interest rates, so SVB's older bonds declined in value. At the same time, many of the bank's customers ran into financial trouble and began withdrawing funds from their accounts. To accommodate these large withdrawals, the bank decided to sell some of its investments. As soon as investors became aware of this vulnerability, customers holding more than $250,000 (above the insured limit) rushed to pull their uninsured deposits out of the bank.
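The mechanism behind those paper losses is standard bond pricing: a bond's value is the present value of its future payments, discounted at the current market yield. A minimal sketch (the face value, coupon, yields, and maturity below are invented for illustration, not SVB's actual holdings):

```python
def bond_price(face, coupon_rate, market_yield, years):
    """Present value of a fixed-coupon bond discounted at the market yield."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_yield) ** t for t in range(1, years + 1))
    pv_face = face / (1 + market_yield) ** years
    return pv_coupons + pv_face

# A 10-year bond bought when yields were around 1.5% is worth par...
price_at_purchase = bond_price(1000, 0.015, 0.015, 10)
# ...but much less once newly issued bonds pay around 4.5%:
price_after_hikes = bond_price(1000, 0.015, 0.045, 10)
```

A holder who can wait until maturity still gets the face value back; a bank forced to sell early, as SVB was, must realize the loss.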

On March 9th, 2023, the stock of SVB's holding company (SVB Financial Group) crashed at the market opening, and more and more customers withdrew their money. On March 10th, 2023, trading in SVB Financial Group stock was halted, and federal regulators took over the bank before it opened that day.

Since regulators were unable to find a buyer, deposits were moved to a bridge bank owned by the FDIC, with the promise that insured deposits would be available by March 13th. On March 12th, however, regulators announced emergency measures allowing customers to recover all funds, including those that were uninsured.

SVB Financial Group filed for bankruptcy on March 17th, and on March 26th First Citizens Bank bought the remainder of Silicon Valley Bridge Bank, except for $90 billion of securities that remained in FDIC receivership.

It was unusual to announce that all depositors in SVB would be made whole, so what was the reason for this promise?

One reason was that countless tech companies banked with SVB, and the US government wanted to prevent the bank's difficulties from spilling over into the tech sector. Another was the fear of contagion: uninsured depositors at other banks might be tempted to run on their own banks. Furthermore, monetary authorities would usually cut interest rates and inject more money into the banking system to help it regain its balance, but in this case, since inflation had to be tamed, that was not possible.

Even though depositors got their money back, the quick meltdown of SVB will most likely have short- and long-term effects on the tech startup industry. Some founders, for example, are spreading their cash across multiple bank accounts to reduce their risk. Depositors may now favor larger established banks, leaving startups with fewer options when selecting a bank and ruling out smaller regional banks that could offer some advantages. Additionally, fundraising might become even harder for startups. SVB was a leading provider of venture debt, a loan to an early-stage company that provides liquidity for the period between equity funding rounds. The closure of the bank coincided with a decline in venture capital markets, raising concerns about how startups will secure funding in the future.

Although some investors fear that this incident could push startup investing further downwards, others are more optimistic and believe it could actually go in the opposite direction. Only the future will tell if, and how, the startup industry will recover from this incident.


ChatGPT: The AI chatbot everyone’s talking about

Last week, OpenAI released ChatGPT, an AI chatbot that can understand human language and generate human-like answers. This chatbot can help you write code, compose essays, and engage in conversation or even philosophical discussions.

Thanks to its easy-to-use interface, it has already attracted over 1 million users.

But what is ChatGPT?

It is an implementation of the new GPT-3.5 natural language generation technology, an improvement on GPT-3, OpenAI's previous language model. The latter was criticised because it often generated toxic outputs, made up facts, or produced violent content without explicit prompting.

To address this problem, ChatGPT was trained on a huge sample of text taken from the internet, such as Wikipedia entries, social media posts and news articles, but it was also refined with Reinforcement Learning from Human Feedback (RLHF), a technique that uses feedback from humans to train a better model.

In particular, humans wrote conversation data playing both the assistant and the user, generating a dataset of what they considered to be good responses.

Moreover, the outputs of the model were evaluated by humans, who ranked them from worst to best.
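A common way to turn such human rankings into a training signal is a pairwise (Bradley-Terry style) loss: a reward model is penalized unless it scores the human-preferred response above the rejected one. A minimal sketch of that loss, not OpenAI's actual implementation:

```python
import math

def reward_ranking_loss(score_preferred, score_rejected):
    """Pairwise ranking loss: -log(sigmoid(margin)).
    Small when the reward model scores the human-preferred
    response above the rejected one, large when inverted."""
    margin = score_preferred - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Correctly ordered pair -> small loss; inverted pair -> large loss.
assert reward_ranking_loss(2.0, -1.0) < reward_ranking_loss(-1.0, 2.0)
```

The trained reward model is then used to fine-tune the chatbot itself, rewarding answers that humans would rank highly.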

The results are impressive: ask a question and it will provide a complete, explanatory answer that seems human-made. It's enjoyable to speak with and useful enough to ask for information.

Unlike the previous model, ChatGPT can also answer follow-up questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests. Moreover, it will take stances and try not to produce harmful content. For instance, if you ask the chatbot what Hitler did well, it refuses to list anything.

While this might already seem like a lot, the system is just "an early demo of what's possible", according to Sam Altman, OpenAI's CEO. Tools like these can significantly help humans, for instance by correcting code or by providing new ideas. If reliable, they could be used to look up information easily, without browsing through the different websites surfaced by a search engine like Google.

However, ChatGPT is still flawed. In particular, it inherited some of the problems of GPT-3: for instance, it can write “plausible-sounding but incorrect or nonsensical answers”, the company itself said. Therefore, when searching for information, you should still be careful of the results, since they might be perfectly made up!

Moreover, biases can surface through simple workarounds, usually by giving the chatbot specific prompts. For instance, if you ask the chatbot to pretend that it is evil, you can often bypass the moderation filters. There are already plenty of examples of these kinds of problematic behaviours on the Internet.
If you want to try the model yourself, it is free to use during the research preview on OpenAI's website. But be careful, and don't trust everything you read!

The FTX bankruptcy: a closer look at what went wrong

This month, one of the world’s largest cryptocurrency exchanges, FTX, filed for bankruptcy, and its former CEO and founder, Sam Bankman-Fried, resigned. This collapse has been a shock to the cryptocurrency industry, and it has happened during a tough year for it.

Like a typical cryptocurrency exchange, FTX enabled customers to trade digital currencies for other digital currencies or for traditional money, and vice versa. It was based in the Bahamas and, at its peak, the company was valued at $32 billion.

To better understand what happened, we need to look at Sam Bankman-Fried.

In 2017, he co-founded Alameda Research, a small quantitative trading firm that focused on arbitrage trading. That is, the company bought cryptocurrencies in one market and sold them in another, pocketing the difference.
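Arbitrage of this kind is simple arithmetic: buy where the asset is cheap, sell where it is dear, and net out the fees. A toy illustration (the quantities, prices, and fee rate are invented for the example):

```python
def arbitrage_profit(quantity, buy_price, sell_price, fee_rate=0.001):
    """Profit from buying on one exchange and selling on another,
    with a proportional trading fee charged on each leg."""
    cost = quantity * buy_price * (1 + fee_rate)       # buy leg, fee added
    proceeds = quantity * sell_price * (1 - fee_rate)  # sell leg, fee deducted
    return proceeds - cost

# Buy 10 coins at $19,950 on exchange A, sell at $20,100 on exchange B:
print(round(arbitrage_profit(10, 19_950, 20_100), 2))  # prints 1099.5
```

The spread between markets is usually small, so firms like Alameda relied on volume and speed to make such trades worthwhile.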

To raise more funds for Alameda's business, Sam Bankman-Fried created FTX in 2019. FTX almost immediately created its own token, FTT, which worked like a loyalty program for customers: they could use the token to trade cryptocurrencies at discounted transaction fees. At the same time, however, FTT was also bought and sold like any other token.

To maintain the value of FTT, Alameda Research served as the token’s main market maker, which means it bought and sold most of FTT on the exchange. 

At the same time, however, Alameda also used FTT to make speculative bets on other cryptocurrencies: it used its FTT holdings as collateral for loans that funded its trading activities. In other words, customers gave money to FTX to buy FTT, and this allowed Alameda to make risky investments backed by its FTT holdings.

This meant that Alameda was heavily exposed to volatility in FTT: if the value of FTT fell, Alameda would be unable to pay back its lenders.

When cryptocurrency prices fell earlier this year, the price of FTT fell too. Alameda struggled to pay its lenders back, and FTX helped the company by using funds that customers had deposited with the exchange.

The events that shook FTX in early November trace back to a CoinDesk article that questioned FTX's solvency. In particular, CoinDesk reported that $5.5 billion of the $14.6 billion on Alameda Research's balance sheet consisted of FTT.

After this report, the CEO of Binance, one of the biggest cryptocurrency exchanges, sold his FTT holdings, setting off a chain reaction of customers withdrawing assets from FTX. This made the price of FTT fall even further, and FTX found itself in a liquidity crunch, unable to process all those withdrawals for lack of money.

Binance later announced a tentative deal to buy FTX, but after finding too many holes and problems in the company’s finances it decided not to proceed with the offer.

After these events, FTX and several connected companies filed for bankruptcy and Sam Bankman-Fried resigned. The bankruptcy filing, done by the new FTX chief, described numerous corporate missteps, including the use of software to “conceal the misuse of customer funds”.

The price of FTT has dropped more than 90 percent since early November. Sam Bankman-Fried, meanwhile, saw his wealth plummet by over 94%, from $15 billion to less than $1 billion in a single day.

FTX has now said that it owes its 50 biggest creditors nearly $3.1 billion; in total, the firm has $8 billion to repay.

Adding to the company’s problems, the Royal Bahamas Police Force said it is investigating FTX. Moreover, the U.S. Department of Justice and the Securities and Exchange Commission are already examining the company, in order to assess whether FTX improperly used customer funds to prop up Alameda Research.

COP27: how growing tensions between developing and wealthy countries are shaping the talks

The issue of financing the fight against climate change monopolized the two-week long COP27, the U.N. climate talks in the seaside resort town of Sharm el-Sheikh, Egypt. They especially focused on how wealthier countries can help less developed ones cut emissions and adapt to the dramatic impacts of global warming. With this aim, U.N. experts published a list of projects worth $120 billion that investors could back.

However, another report suggested that developing countries would need to secure $1 trillion in external financing every year by 2030. To this sum, these countries would have to add their own funding to effectively meet the world’s goal of preventing climate change. By contrast, the world’s leading development banks lent only 51 billion dollars to developing countries in 2021, nowhere near enough to make a dent in this issue. Meanwhile, Italy, Britain and Sweden were among the countries that pledged more than $350 million to finance solutions to the climate crisis in countries such as Egypt, Fiji, Kenya and Malawi. Britain’s latest Prime Minister, Rishi Sunak, has underlined that “government spending to combat climate change is the right thing to do, from an environmental, moral and economic perspective”.

Leaders from poor countries criticized wealthy governments and oil companies for driving global warming, using their speeches at the COP27 to demand that they pay for damages being inflicted on their economies. In particular, oil companies were called on to use some of their enormous surplus profits to help countries already affected by increasing natural disasters, such as floods and droughts, adapt to these new conditions. Gaston Browne, Antigua’s prime minister, speaking at the conference on behalf of the Alliance of Small Island States, declared, “the oil and gas industry continues to earn almost $3 billion in daily profits. It is about time that these companies are made to pay a global carbon tax on their profits as a source of funding for loss and damage. While they are profiting, the planet is burning”.

The comments reflected increasing tension in international climate negotiations between rich and poor states. Multi-billion-dollar oil industry profits since Russia’s invasion of Ukraine in February, which has roiled markets and disrupted supplies, have angered governments worldwide concerned about climate change and rampant consumer inflation. Senegal’s President Macky Sall told the conference that poor developing countries in Africa need increased funding from wealthy nations to adapt to worsening climate change, and would resist calls for an immediate shift away from the fossil fuels African countries need to expand their economies. Although they favor developing renewable energy, they currently still need considerable amounts of fossil fuel, just as wealthy countries did to achieve economic growth. Demands for compensation for the “loss and damage” caused by global warming have long been rejected by wealthy countries, whose leaders are wary of accepting liability for the emissions driving climate change.

Finally, Ukrainian President Volodymyr Zelenskiy told conference delegates in a video message that Russia’s invasion of Ukraine has distracted world governments from efforts to combat climate change and boosted demand for coal: “There can be no effective climate policy without peace.”

What are the plans for Twitter after Elon Musk’s takeover?

After months of negotiations, Twitter Inc., the American communications company based in San Francisco, California, was recently purchased by Elon Musk, the CEO of SpaceX and Tesla. It was sold for $44 billion, at a price of $54.20 a share. The billionaire financed the takeover with his own money, along with other investors and $13 billion of debt. The agreement was signed on 27 October, officially making Musk the owner. At that moment Twitter’s stock was suspended from trading on the New York Stock Exchange, and the company is set to be delisted on November 8, 2022.

This past Thursday, when Elon Musk officially took control of the company, four of Twitter’s top executives were fired: Parag Agrawal, the chief executive; Ned Segal, the chief financial officer; Vijaya Gadde, the top legal and policy executive; and Sean Edgett, the general counsel. This Monday, the company’s filings revealed that Musk has appointed himself CEO and fired the rest of the board of directors, made up of nine non-executive directors. The billionaire changed his Twitter profile to “Chief Twit” to announce his new position.

Moreover, Musk’s plans as Twitter’s new owner include extensive layoffs, rumoured to affect around 75% of the workforce, in an effort to pay down the debt burden that has grown substantially since the start of his acquisition. Musk himself has denied these claims, instead confirming a first round of layoffs cutting 25% of Twitter’s staff, a move that has already disrupted the workforce and created tension among employees.

Musk’s acquisition of Twitter promises changes in content too. The entrepreneur plans to overhaul how Twitter moderates the spread of information across its platform (the social network has long been accused of favouring left-leaning, liberal messages, which it has denied). He aims to maximise ‘free speech’ in the app: “Free speech is the bedrock of a functioning democracy, and Twitter is the digital town square where matters vital to the future of humanity are debated,” he has said. Musk has strongly disapproved of permanent bans for those who repeatedly violate the rules, raising the possibility that previously banned, controversial users such as former US president Donald Trump may be readmitted to the platform after the change of ownership. His new approach to information has created distress among many researchers and brands, who argue that Twitter’s rules have been essential to countering online hate speech and disinformation and to creating a ‘healthy’ platform. Musk has responded to the criticism by assuring that Twitter will not become a “free-for-all hellscape where anything can be said with no consequences.” He has also promised that Twitter will create a new content moderation council with diverse viewpoints, which will decide on content policy and review reinstatements. Musk also intends to improve the platform by getting rid of the spam and bot accounts that he believes impair the site.

Another change to Twitter’s structure is taking the company private. As soon as the agreement was signed, the company was no longer traded on the stock exchange. But what are the consequences of this? Musk will no longer have to answer to shareholders, meaning he will be able to make changes to the service freely, imposing his own vision and strategy. Being private will also shield Twitter from public scrutiny, as it will no longer be required to make quarterly disclosures about the health of its business.

Reportedly, the entrepreneur also has plans to turn the social media company into “X, the everything app”, similar to WeChat in China, which incorporates different services including social media, messaging, finance, food orders and other features.

To conclude, Elon Musk’s acquisition of one of the largest social media apps in the world has come with a variety of plans attached. Freedom of speech seems to be the entrepreneur’s main focus; however, the level of power he holds as an individual CEO and the degree of deregulation he intends to impose may draw pushback from regulatory agencies, especially in Europe. Changing the content of the platform might have irreversible effects on the social and political ecosystem, which is why these changes must be analysed carefully before being put into practice.

What is happening to China’s chip industry?

In October, the US introduced extensive controls on chip exports in an attempt to slow China’s progress in artificial intelligence and supercomputers and make it more difficult for the country to produce advanced semiconductors. The controls are probably the toughest measures taken by President Joe Biden against China and the first serious attempt to slow down its military modernisation by targeting the technologies behind everything from modelling nuclear weapons to developing hypersonic weapons.    

China’s leading chipmaker, Semiconductor Manufacturing International Corporation, which makes the logic chips that power computers, will be hit by restrictions preventing US companies from supplying technology for chips more advanced than 14 nanometres or, in some cases, 16 nm. The rules will affect areas such as maintenance, equipment replacement and memory chip manufacturing. Some Chinese memory chips already meet the thresholds set by the US: YMTC, for example, will suffer from US restrictions on the export of technology for producing its most advanced chips. Without access to US technology, China will struggle to maintain its rapid expansion in artificial intelligence and supercomputing, two areas important to the Chinese military, as well as in cloud computing. However, the controls could also backfire in the long run, as they could turbocharge China’s own chip industry.

At the same time, the sanctions will affect US companies too; according to analysts, the impact on them depends on how aggressively the US applies the controls. Many US companies that produce chips or chipmaking tools count China as their main market: it accounts for 33% of Applied Materials’ sales, 27% at Intel and 31% at Lam Research. Applied Materials said the restrictions will cut 6 per cent from next quarter’s sales. Nvidia, which will no longer be able to export to China the advanced graphics processing units used in machine learning systems, estimated a quarterly impact of $400 million, or 7% of its sales. Lam Research, a major supplier to China’s YMTC, estimated a cut of up to $2.5 billion, or 15% of its 2023 sales. But some US companies could benefit, such as memory chip maker Micron, which faces growing competition from YMTC.

According to experts, Beijing has limited capacity to retaliate. Last year, China passed a law allowing countermeasures against sanctions, but it has not yet been used in response to Washington’s tightening of semiconductor controls or to retaliate against other US moves. Some experts suggest that China is unlikely to cut off technology giants such as Microsoft and Apple from its huge consumer market, and that Beijing will prefer to seek an agreement. As for the political outcome, however, the more imminent risk is that Biden’s gamble could prompt Xi Jinping, China’s president, to accelerate his timetable for Taiwan reunification. The island is by far the world’s largest maker of high-end chips. It is notable that Biden’s move came shortly before China’s 20th party congress: many China watchers think Xi wanted to put the congress behind him before turning to his vow of fixing the Taiwan problem. Biden could have made a violent resolution of the Taiwan question more likely; he could equally have given Xi pause for thought. We will find out.

Finally, spillovers to other sectors are likely. On 7 October, the US added several Chinese companies, including YMTC, to the ‘unverified list’ of entities for which Washington has been unable to conduct end-user audits verifying that US technology is being used for legitimate purposes. If these problems are not resolved within 60 days of listing, a company will almost certainly be placed on the ‘entity list’, which effectively bans US companies from supplying it with technology. European officials believe the US is likely to extend the range of stricter measures, with knock-on effects for EU companies. Some analysts warn that most Chinese manufacturers could run out of stock, triggering a chip shortage that would affect other industries, such as consumer electronics, medical devices, aerospace and cloud computing. Such a shortage could, for example, slow vehicle deliveries or further erode the profitability of Chinese car manufacturers.


A look at new translation tools – Breaking barriers thanks to AI

Most translation tools commonly used today are focused on written high-resource languages that have lots of translation data available. However, this approach doesn’t allow for the accurate translation of low-resource languages – that is, with little to no data available – and of oral languages, where we cannot find corpora of written text, for example, due to the lack of a standardized writing system.

While it might not affect us directly, it is still important to address such problems, since around 20 percent of the world’s population does not speak languages covered by the usual translation models. This implies, for instance, that people from these low-resource communities cannot easily access information online. Take Wikipedia: Lingala, spoken by over 20 million people in Africa, has around 4,000 articles, far fewer than the 2.5 million available in Swedish, a language spoken by around 10 million people. An efficient translation tool for Lingala would let its speakers access as much information as Swedish speakers. Therefore, building an accurate and inclusive translator is very important.

Already back in February, Meta announced that it wanted to tackle this problem through two new projects: the first is NLLB (No Language Left Behind), whose goal is to build an AI translator that also covers languages with limited translation data; the second is UST (Universal Speech Translator), which aims to translate speech to speech in real time.

NLLB (No Language Left Behind)

Let’s focus on the first project. The first big achievement came back in July 2022, when they announced a new model that allowed the translation of 200 languages, many of which were previously low-resource ones.

The main problem they had to face was indeed the unavailability of data. In particular, they used 3 types of data:

  • Translation data already in circulation: this also includes biblical translations.
  • A human-curated dataset that they created themselves, called NLLB seed.
  • Monolingual data used for bitext mining, that is, for identifying sentences in different languages that are translations of each other.

To exploit the data in the best way possible they used LASER3, a toolkit that allowed them to identify sentences with a similar representation in different languages and understand how likely it is for them to have the same meaning.

In particular, this tool embeds sentences from different languages into a single multilingual representation space. Then, by computing the distance between the sentence to be translated and a candidate translation, you can check whether it is a plausible translation.
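The idea of comparing sentences in a shared embedding space can be sketched with cosine similarity (the three-dimensional vectors below are made up for illustration; real LASER embeddings have hundreds of dimensions):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors:
    close to 1 when they point in the same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy embeddings in a shared multilingual space (invented values):
emb_en = [0.9, 0.1, 0.3]      # "The weather is nice"
emb_fr = [0.85, 0.15, 0.32]   # "Il fait beau" (same meaning)
emb_other = [0.1, 0.9, -0.4]  # an unrelated sentence

# Sentences with the same meaning should land close together:
assert cosine_similarity(emb_en, emb_fr) > cosine_similarity(emb_en, emb_other)
```

Mining then amounts to searching, for each sentence, the nearest neighbours in the other language and keeping pairs whose similarity is high enough.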

Moreover, using this toolkit, it is relatively easy to adapt the result of the training on some languages to some others. This means that you can include languages in the multilingual space even if little data is available.

The other important challenge they had to face was fitting 200 languages into a single model, without overfitting for the high-resource languages. So they tried to ensure that the low-resource languages got a fair capacity allocation despite the much smaller amount of training data.

The results that they were able to achieve were very good and the translations were of high quality. With respect to other translation tools, there has been an average increase in quality of 44 percent across all languages, with an increase of over 70 percent for some African and Indian ones. They also expanded the existing dataset for the evaluation of translation tools by developing FLORES-200, a dataset that includes over 200 languages. 

UST (Universal Speech Translator)

As for the second project, last week Meta announced a new model that allows the translation of a primarily oral language, Hokkien, into English. Hokkien is a variety of Chinese widely spoken in southeastern China (around 45 million people speak it) and one of many oral languages without an official writing system.

To translate Hokkien, typical models cannot be used, as they rely on transcriptions: they first convert speech into text, then they translate the text to the target language and then they convert the result back to speech. However, since Hokkien doesn’t have a standard written form, producing transcripts doesn’t work. Therefore, you need to directly focus on speech-to-speech translation.

As with NLLB, the first problem that they had to face was the lack of data. To solve this, they used Mandarin as an intermediate language. So, they first translated English (or Hokkien) vocal data to Mandarin text, and then they translated it back to Hokkien (or English). By doing so, they exploited the higher presence of resources and translations in Mandarin.

They also used audio mining, that is they managed to embed Hokkien into the same space of the other languages, even without a written form. Therefore, the vocal content in Hokkien could be put in relation to other similar text and vocal contents.

For the translation itself, they used various methods, including a recently developed technique called S2UT (speech-to-unit translation), which translates input speech into a sequence of discrete acoustic units. From these units, they generated waveforms corresponding to the translation.

To evaluate the results, they transcribed the audio into a standardized phonetic alphabet called Tâi-lô. Through this method, they were able to assess the quality of the translation. Moreover, they created the first dataset for speech-to-speech translations from Hokkien to English. 

For now, this model allows translating only a sentence at a time, but it is still an impressive achievement.

What are the effects of such technologies?

We have to be careful when using translation tools, especially if they are not high-quality, since we might produce content that could be harmful. The most widely used algorithm for assessing a model's performance is BLEU: it compares the given translation with a set of good-quality reference translations. However, this method is flawed in several ways:

  • A sentence that was translated correctly could still receive a low score depending on the data that we use for the assessment.
  • It doesn’t take into account differences in the gravity of an error. For instance, if you want to translate the word for “cat” but the model gives you “kitten”, the error is penalized in the same way as if it gave you the word “computer”. So it is not an absolute measure of the correctness of the model.
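The “kitten vs. computer” point can be seen in a minimal single-reference BLEU sketch (real implementations add smoothing, tokenization rules, and multiple references):

```python
import math
from collections import Counter

def bleu(candidate, reference, max_n=4):
    """Minimal single-reference BLEU: geometric mean of modified
    n-gram precisions, times a brevity penalty for short candidates."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
        ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
        # Clip each n-gram count by its count in the reference:
        overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
        precisions.append(overlap / max(sum(cand_ngrams.values()), 1))
    if min(precisions) == 0:
        return 0.0
    score = math.exp(sum(math.log(p) for p in precisions) / max_n)
    brevity = 1.0 if len(cand) >= len(ref) else math.exp(1 - len(ref) / len(cand))
    return brevity * score

reference = "the cat sat on the mat"
# A near-miss ("kitten") and a nonsense substitution ("computer")
# get exactly the same score: BLEU is blind to error severity.
assert bleu("the kitten sat on the mat", reference) == \
       bleu("the computer sat on the mat", reference)
```

Both candidates miss the same n-grams, so BLEU cannot tell a harmless synonym from a meaning-destroying error.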

Considering the possible errors of translations and their consequences is very important when these tools are used on platforms such as Facebook and Instagram. There have already been cases where inaccurate translation led to unwanted consequences. For instance, in 2017 a Palestinian man was arrested by Israeli police after Facebook’s translation software mistranslated “good morning” as “hurt them/attack them” in a post he shared. To mitigate this problem, Meta created toxicity lists for all 200 languages in the NLLB model, which remove unwanted toxic content appearing in translations.

Still, the benefits of such technologies are enormous. Better machine translation could mean more people interacting with each other, sharing ideas and knowledge, and this would break language barriers both in the real world and on the Internet. For instance, people who only speak a dialect could access the same amount of information as those who speak English. 

These models are already being used. For instance, a partnership with the Wikimedia Foundation has put NLLB in the hands of those who help translate the millions of articles on Wikipedia.

Electric Vehicles – What China’s Leadership Can Teach Europe

As the European Union moves to ban the sale of new petrol and diesel cars from 2035, car makers and countries are taking steps to help smooth the transition. While the majority of electric vehicle (EV) markets are still heavily dependent on subsidies and financial incentives from governments, China’s electric cars are now competitive, attractive and affordable. This makes it a crucial case study for European companies who wish to follow in their footsteps.

Of the world’s 10 best-selling EV brands, half are Chinese, led by BYD, which lags only Tesla in global market share and is starting to ship its electric cars abroad. It took China more than a decade of subsidies, long-term investments and infrastructure spending to lay the foundation for its EV market to start standing on its own. Since 2009, 14.8 billion US dollars in subsidies have been provided to EV consumers. Direct state aid was supposed to end in 2020, but to help sales rebound from the pandemic, the government extended monetary incentives and purchase-tax exemptions for EVs until 2023.

China now has the biggest electric car market in the world, accounting for 57% of global electric vehicle sales. Producers compete on prices and features, a sign that they are no longer dependent on subsidies. But financial incentives are not the only reason for the Chinese to switch to electric cars. Many cities have introduced additional restrictions on fuel-powered cars, alongside advantages for electric ones. In Chengdu, in Southwest China, traditional cars are barred from the road on certain days of the week to help reduce congestion and pollution; electric vehicles, however, are free to come and go, and their parking is free for the first two hours at public lots. These incentives lower the cost of driving electric cars and boost their attractiveness.

China doubled its number of charging stations in one year, reaching around four million nationwide, an essential step to reassure drivers of electric vehicles that they will be able to charge their batteries as needed. Rolling out adequate charging infrastructure is one of the main challenges faced by producers of electric cars, as batteries still offer less driving range than fuel tanks and take much longer to recharge. This generates what is known as “range anxiety” among prospective EV buyers hesitant to make the switch.

As of today in Europe, EVs are suitable for those who commute to work daily and can recharge their batteries every night, but they have limited use for any type of longer drive, as the availability of charging stations is unpredictable throughout Europe. According to the Electric Vehicle Database, the average battery range currently sits at 326 km, enough for everyday use. But a cross-border European road trip would require charging along the way, and the infrastructure across the continent remains patchy in many areas. Charging stations are still unevenly distributed, and car producers and governments are pushing to make them increasingly available to pave the way towards a greener future.
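The arithmetic behind range anxiety is simple to make concrete. The 326 km average range below comes from the text; the trip distance and the safety margin are hypothetical assumptions for illustration.

```python
import math

def charging_stops(trip_km, range_km, margin=0.2):
    """Number of charging stops needed for a trip, keeping a safety
    margin (fraction of range held in reserve) before each recharge."""
    usable = range_km * (1 - margin)   # e.g. 326 km * 0.8 = 260.8 km per leg
    if trip_km <= usable:
        return 0
    # The trip splits into ceil(trip/usable) legs; each stop adds one leg.
    return math.ceil(trip_km / usable) - 1

# A cross-border trip of roughly 850 km (an approximate, assumed figure):
print(charging_stops(850, 326))  # -> 3
```

Three mandatory stops on a single trip, each dependent on finding a working charger, illustrates why patchy infrastructure weighs so heavily on buyers' decisions.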

Corporate Venture Capital Strategy – The Entrepreneurial Setting in China

A decade ago, established businesses in several industries were exceedingly frightened of the impending flood of disruptive forces. This was especially true in the FinTech industry, where a new breed of quick-footed, agile arrivals equipped with the Y Combinator playbook threatened to wreak havoc among long-established incumbents everywhere. 

Several years later, though, the situation had substantially changed, with incumbents starting to collaborate closely with entrepreneurs. The situation has evolved once more: established businesses now play a key role in financing entrepreneurial ventures, a technique known as corporate venture capital (CVC).

However, as Gary Dushnitsky and Lei Yu demonstrate in a recent research paper, this local concentration provides a relatively constrained lens through which to understand global CVC practices and goals. According to Dr. Dushnitsky, the study was inspired by a perplexing oddity: corporate venturing has spread well beyond the US context. More than 60% of all CVC deals worldwide were made in the US in 2013, but by 2018 that number had dropped to 41%, on par with CVC activity in Asia.

The literature is heavily influenced by research on corporate venturing in developed nations. Yet it seems improbable that CVC investors in developing nations invest to gain a window on technology: startups in these nations frequently benefit from dramatic demand growth and are not typically a source of novel technologies. As a result, the existing scholarly explanation cannot account for the growth of CVC in China. 

The researchers used an abductive method to examine the origins of CVC in China in order to explain the oddity. China was selected as the study’s location because, according to Dr. Dushnitsky, it is “a thriving entrepreneurial climate” and is second only to the United States in terms of the total number of startups and investment received.

Furthermore, prominent CVC proponents in China quoted by the authors in their research seemed to have motivations distinct from the window-on-technology orthodoxy. For instance, Bangxin Zhang, Chairman and CEO of the educational technology company TAL Group, stated at the company’s 2016 annual conference that TAL was committed to seizing the numerous opportunities it saw in the education sector, but that some of those opportunities would “not be suitable” for the company to develop internally, so instead it would use its “capital, business, and resources to support external ventures.”

Similar to this, Alibaba Vice President Joseph Tsai stated at the company’s 2017 Investor Day that when evaluating potential CVC targets, Alibaba’s top priorities were determining whether the target company could help the company increase user numbers and improve engagement, enhance the customer experience, and expand its portfolio of goods and services.

Dushnitsky and Yu built a comprehensive dataset of Chinese CVCs active in the late 2010s by combining Chinese and foreign databases, guided by these and similar industry insights. In contrast to the common perception of CVC as a window on technology, analysis of the dataset for cross-industry CVC patterns reveals an alternative CVC purpose, primarily associated with harnessing growth through market expansion. The results “reflect the characteristics of the Chinese scenario, where entrepreneurs profit from the dramatically increased economic activity and serve as a vehicle to harness the global innovation frontier,” says Dr. Dushnitsky.

The researchers reference earlier research that concludes that “firms in developing countries can enjoy superior performance by leveraging rapid industry expansion; all while using technologies that already exist in the developed world”, again indicating that industry growth is an important factor in determining investments in the developing world. In addition to the conventional technology-based interpretation of CVC activity, Dushnitsky and Yu thus offer a market-based explanation. Or, to put it another way, they contend that “many investors are attracted to entrepreneurial enterprises that pursue growth by addressing rapidly rising market demands (market-based), rather than by developing novel technology (technology-based).”

The researchers also took into account a potential third explanation for CVC rise in China: a government-based argument, given the very different institutional structure that China represents in comparison to Western institutions. According to Dr. Dushnitsky, this is “compatible with the concept that venture activities differ across institutional environments,” where “CVC would be particularly salient in industries of national strategic interest.” 

Given the command-and-control nature of the Chinese market context (and the importance placed on it in the academic literature), it may seem counterintuitive, but the study’s key finding is that, at the time the study was conducted (the late 2010s), the cross-industry CVC patterns in China did not correspond to any of the sectors that the government had prioritized as sectors of national importance.

As an alternative, the researchers contend that while “a broad range of antecedents drive incumbent-startup interactions in the Chinese scenario”, CVC activity largely adopts a “harness industry expansion” rationale and, to a lesser extent, displays a “window on technology” purpose. 

This, according to Dr. Dushnitsky, “underscores the contrasting roles of CVC (in particular) and incumbent-startup partnership (more broadly) in unlocking corporate development,” which is likely caused by fundamental disparities between the US and Chinese economies.

The research provides a more comprehensive set of justifications for the possible application of the CVC technique. According to the study, while technological advancement and a robust corporate R&D foundation frequently fuel CVC operations in industrialized nations, these drivers are only “partially supported” in China. The research shows that although “technology advancement might be a major enticement for CVC operations, its impacts do not rely on the R&D bases inside a given company.” Industries in China that offer businesses resources and opportunities to grow, i.e. munificent industries, display a high degree of CVC activity. 

The study’s findings “have the potential to substantially redefine our understanding of CVC strategy and its objectives,” according to Dr. Dushnitsky, and go beyond regional coverage.

The key point is that CVC is not only a strategy for gaining a window on technology; it can also be used to harness market growth. It is important to note, however, that the study’s findings cover the period 2014–18; because the Chinese setting has changed significantly since, they are not necessarily indicative of current VC practices in China. 

Dr. Dushnitsky offers an instructive illustration: every company has a marketing plan, but not all companies have the same marketing strategy. The study uses China as the backdrop to demonstrate that idea. In other words, companies may employ CVC as a tactic to capitalize on market growth in nations with rapid GDP development.

AI in Art: What Does It Mean?

Far from being distant, the points of encounter and collaboration between art and artificial intelligence are increasingly numerous. In addition to changing the concept of who can produce art and providing new ways to practice it, artificial intelligence is indeed a new way to approach and study the aesthetic experience, making it even more engaging and participatory. 

Emotional Intelligence
At Stanford University, a team of researchers has taught computers to recognize not only what objects are present in an image, but how those images make people feel, creating algorithms with “emotional intelligence”. The group developed an algorithm called ArtEmis, based on 81,000 WikiArt paintings and 440,000 responses collected from over 6,500 participants, who evaluated each painting according to the emotion they felt when viewing it and provided a brief explanation of the emotional reaction they chose.
Using these responses, the team trained the algorithm to classify a painting into one of eight emotional categories, from astonishment to amusement, from fear to sadness. Trained in this way, the algorithm can analyze a new image it has never seen and classify it by the emotion a viewer might feel in front of it. Moreover, it doesn’t just capture the overall emotional experience of an image; it can also decipher different emotions within the painting.
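The classification step can be sketched as follows: a feature vector describing the image is mapped through a linear layer and a softmax to a probability distribution over the eight emotion categories. This is a hypothetical stand-in, not the actual ArtEmis model; the feature extractor, weights, and dimensions here are illustrative assumptions.

```python
import numpy as np

# Eight emotion categories of the kind described in the text.
EMOTIONS = ["amusement", "awe", "contentment", "excitement",
            "anger", "disgust", "fear", "sadness"]

rng = np.random.default_rng(0)
FEATURE_DIM = 16                                  # assumed feature size
W = rng.normal(size=(FEATURE_DIM, len(EMOTIONS))) # stand-in for learned weights
b = np.zeros(len(EMOTIONS))

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def classify(features):
    """Return (top emotion, full probability distribution) for one image."""
    probs = softmax(features @ W + b)
    return EMOTIONS[int(probs.argmax())], probs

features = rng.normal(size=FEATURE_DIM)  # stand-in for CNN image features
top, probs = classify(features)
print(top, round(float(probs.max()), 3))
```

Because the output is a full distribution rather than a single label, the same mechanism can surface several emotions at once, which is what lets such a model report mixed emotional readings of one painting.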

When AI is the artist
In the field of machine learning, a Generative Adversarial Network (GAN) is a pair of neural networks trained to compete against each other. One, called the generator, has the task of producing new data; the other, the discriminator, learns to distinguish real data from the artificially created data.
Through this dialogue, a GAN can process an impressive amount of data, with completely unexpected results that escape human control. GANs can be used, for example, to create entirely realistic photographs of people who do not exist, starting from an adequate number of real images. 
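The competition between the two networks can be made concrete through their loss functions. The sketch below uses the standard GAN formulation, not the specific model behind any particular artwork; `d_real` and `d_fake` are the discriminator's scores (between 0 and 1) on a real and a generated sample.

```python
import math

def discriminator_loss(d_real, d_fake):
    # The discriminator wants d_real -> 1 (real spotted as real)
    # and d_fake -> 0 (fake spotted as fake).
    return -(math.log(d_real) + math.log(1 - d_fake))

def generator_loss(d_fake):
    # The generator wants the discriminator fooled: d_fake -> 1.
    return -math.log(d_fake)

# When a fake nearly passes as real, the generator's loss is small;
# when the fake is easily spotted, its loss is large.
print(round(generator_loss(0.9), 3))  # -> 0.105
print(round(generator_loss(0.1), 3))  # -> 2.303
```

Training alternates between minimizing these two losses, so improvement in one network pressures the other to improve, which is the "dialogue" described above.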
In October 2018, for example, a portrait of Edmond de Belamy, created with the help of an AI algorithm, sold at auction for $432,500 at Christie’s auction house. According to Christie’s, the portrait had been created through the use of artificial intelligence: to produce it, 15,000 portraits painted between the 14th and 20th centuries were fed into the system, and the two networks did the rest.
Moreover, several experimental platforms, such as Artbreeder, make this process accessible to anyone who wants to try their hand at it: a sort of “collaborative” artistic tool, open source and open to all, for creating new images through algorithms made available to users. 

Works classification
But artificial intelligence can also be useful in classifying works by artist, genre, and style. As more and more works of art are digitized, teaching computers to classify art can assist museum staff in performing these tasks.
Researchers at Zhejiang University of Technology, in China, recently published a paper on this topic, testing seven different algorithm models on three different groups of artworks and comparing people’s performance at classifying the works with and without such a tool. According to the article, the neural network models and computer vision techniques used delivered state-of-the-art, highly refined results.