Christos Makridis

Housing market data suggests the most optimistic buyers during the pandemic are more likely to stop paying their mortgages

This article was originally published in Fortune Magazine with William Larson.

Traditional methods for forecasting housing prices and broader economic indicators are proving insufficient. In our recent research, we explored an overlooked aspect of home buying: the significance of buyers’ expectations. We found that the anticipations of mortgage borrowers regarding future housing prices are crucial for understanding the health of the economy.

There’s a consensus that expectations about future increases in housing prices and interest rates significantly influence housing market dynamics. The logic is straightforward: If individuals believe the value of homes will rise, they are more inclined to take on more debt. This effect is amplified in the housing market because there is no practical way to bet against downturns (you cannot short housing), so the optimism of buyers carries outsized influence. Previous studies have indicated that this optimism can drive rapid, speculation-fueled increases in housing prices, creating “bubbles” of inflated house prices.

What occurs, however, when housing prices remain elevated but expectations begin to decline?

Our findings indicate that expectations are critical in the decision-making processes of mortgage borrowers. During the COVID-19 pandemic, there was a period when confidence in future housing price increases waned, despite actual prices still rising.

We observed that borrowers who were initially the most optimistic about price increases were about 50% more likely to request mortgage forbearance (a pause or reduction in payments) than the broader mortgage-borrowing population (6% versus 4% in our study) during this episode. This underscores the significant impact of borrower expectations on the housing market and economic stability.
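The 50% figure is simply the relative difference between the two rates quoted above; a minimal sketch of the arithmetic, using the rates as reported:

```python
# Forbearance request rates as reported in the study.
optimistic_rate = 0.06   # most-optimistic borrowers
overall_rate = 0.04      # broader mortgage-borrowing population

# Relative increase: how much more likely the optimists were to request forbearance.
relative_increase = (optimistic_rate - overall_rate) / overall_rate
print(f"{relative_increase:.0%}")  # → 50%
```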

Expectations trump reality

We began our research with data from the Federal Housing Finance Agency, specifically the National Mortgage Database, and noticed something intriguing: People who had been optimistic about future house price growth before 2020 were more likely to pause their mortgage payments early in the COVID-19 pandemic, even though house prices were still rising. This observation led us to understand that these borrowers were reacting more to their expectations about the future than to the actual market conditions at the time. When their outlook on house prices temporarily worsened, they opted for forbearance. However, as their optimism returned toward the end of 2020 and throughout the pandemic, these same borrowers began resuming their mortgage payments.

This pattern underscores how crucial expectations are in shaping how borrowers act, which, in turn, has significant effects on the broader economy. After our study period, which ended in 2022, expectations dropped substantially heading into 2023. Our findings suggest that the wave of optimistic borrowers between 2021 and mid-2022 may be particularly vulnerable to such drops in expectations if paired with negative equity or job loss. Thankfully for the mortgage market, the economy–and house prices–remained strong throughout this most recent episode of falling expectations.

Our research serves as a warning to those involved in housing policy and finance: It's essential to consider what borrowers are thinking and expecting, not just the usual financial indicators like interest rates, monthly payments, or how much debt they're taking on compared to the value of their home.

Understanding people's expectations is tricky–they're hard to measure and introduce a challenge known as adverse selection, where borrowers have more information about their ability to pay back loans than the lenders or investors do. Discovering that something not typically tracked by mortgage investors, like borrower expectations, can have a big impact on whether loans are paid as agreed is striking and warrants more attention.

For those regulating and monitoring the housing market, grasping the relationship between what people expect and what's actually happening can lead to better forecasts and smarter policymaking.

Christos A. Makridis, Ph.D., is an associate research professor at Arizona State University, the University of Nicosia, and the founder and CEO of Dainamic Banking.

William D. Larson, Ph.D., is a senior researcher in the U.S. Treasury’s Office of Financial Research, and a non-resident fellow at the George Washington University’s Center for Economic Research. This research was conducted while Larson was a senior economist at the Federal Housing Finance Agency (FHFA). The views presented here are those of the authors alone and not of the U.S. Treasury, FHFA, or the U.S. Government.

Christos Makridis

Why Solana will prevail despite Ethereum ETFs

This article was originally published on Cointelegraph (with Connor O’Shea).

The cryptocurrency world is abuzz with bullish sentiment thanks to Bitcoin spot ETFs. Investors have been quick to accept that Ether spot ETFs will follow in the months ahead.

Many investors have begun to speculate that many altcoins will also get ETFs, which has fueled price appreciation. But all the enthusiasm has led many to overlook an obvious contender to Ethereum — Solana, which has beaten many expectations and continues to boast a sophisticated tech team.

There has been no shortage of stories talking about Solana and its links to FTX founder Sam Bankman-Fried, with many predicting its demise. However, Solana has weathered the storm according to a handful of metrics. For instance, the number of active addresses on the network has nearly reached its 2022 level, and the number of new addresses has continued to grow at almost as fast a rate as in 2022. In fact, the number of unique active wallets (UAW) is up from 2022.

And that’s not to mention the reality that active addresses can be manipulated. An alternative metric, namely capital efficiency (i.e., decentralized exchange volume per dollar of total value locked), suggests Solana has outpaced Ethereum in recent months.
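Capital efficiency, as defined here, is just decentralized exchange volume divided by total value locked; a minimal sketch of the metric, using hypothetical figures rather than real chain data:

```python
def capital_efficiency(dex_volume_usd: float, tvl_usd: float) -> float:
    """DEX volume generated per dollar of total value locked (TVL)."""
    return dex_volume_usd / tvl_usd

# Hypothetical daily figures, for illustration only.
chains = {
    "Solana":   {"dex_volume_usd": 2.0e9, "tvl_usd": 4.0e9},
    "Ethereum": {"dex_volume_usd": 3.0e9, "tvl_usd": 30.0e9},
}

for name, metrics in chains.items():
    # A higher ratio means each locked dollar supports more trading activity.
    print(name, round(capital_efficiency(**metrics), 2))
```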

To be sure, Solana isn’t operating perfectly, but it has clearly defied the expectations of many who thought it might collapse following the FTX blow-up. A large part of its recovery after FTX, and growth since then, has been driven by seemingly wise leadership, which in turn affects their technological investments, strategy, and ultimately community engagement.

The blockchain technology landscape, particularly at the layer-1 (L1) level, currently falls short of the transformative financial future many envisioned. While the initial promise of blockchain offered a vision of faster, cheaper, more efficient, and censorship-resistant financial systems, the reality today presents significant challenges. 

The L1 landscape is characterized by fragmentation of liquidity across a range of layer-2 solutions (L2s) and an absence of scalability that hampers efficiency and user experience in the decentralized exchange space, coupled with concerns about the degree of centralization in the centralized exchange space. This fragmentation has led to a piecemeal ecosystem where the seamless integration and interoperability necessary for a truly revolutionary financial system remain elusive. As a result, the blockchain community finds itself at a crossroads, seeking solutions that can fulfill the early promises of this technology.

Efforts to scale blockchain technology today are diverse, with each project taking a unique approach to overcome limitations in speed, efficiency, and interoperability. The Ethereum blockchain, for example, is pursuing a multi-layer strategy, incorporating both layer-2 scaling solutions and sharding to increase transaction throughput without sacrificing security.

Meanwhile, projects like Cosmos and Polkadot are exploring a multi-chain architecture that allows for specialized blockchains to communicate and transact seamlessly. Solana, along with newer entrants like Sui and Aptos, proposes an alternative approach, focusing on high throughput and efficiency at the layer-1 level itself. Each of these approaches represents a different path towards achieving scalability, with their own set of trade-offs between decentralization, security, and performance. The variety of solutions underlines the complexity of the scalability challenge and the blockchain community's commitment to finding a way forward.

Solana stands out for its unique approach to addressing the core issues plaguing the blockchain ecosystem and its robust community support — evidenced by its resilience post-FTX and the success of its global hackathons, which underscore the platform's strong foundation. And significant UX improvements, notably with mobile integration through Saga phones and competitive platforms like Jupiter — which rivals Uniswap — make Solana highly accessible.

Solana has also demonstrated its ability to handle finance at scale, offering 400ms block times for finality compared with Ethereum's longer durations. Initiatives like Firedancer and local fee markets further exemplify Solana's technological edge. The platform's emphasis on seamless transactions without the need for bridging or dealing with fractured liquidity, coupled with its application in real-world solutions like decentralized physical infrastructure (DePIN), positions Solana as a leader in the blockchain space.

That’s not to say that Solana is guaranteed to surpass Ethereum, or even Bitcoin, but it does mean that Solana is no longer an underdog. And perhaps before any altcoin gets a spot ETF, Solana will have one of its own that will bring greater competition to the blockchain space.

Christos Makridis

How much longer can indebted Americans keep buying crypto?

This article was originally published on Cointelegraph.

Despite many seemingly positive reports about retail spending or the unemployment rate in the United States, the nation continues to battle several structural challenges that have only grown more severe, with a historic $34 trillion in public debt and a record $1.13 trillion in consumer credit card debt. Alexander Hamilton famously remarked that the "national debt, if it is not excessive, will be to us a national blessing," but the scale of current debt raises questions about the sustainability of fiscal policies and their long-term economic impact.

Concerns about the public debt used to be more of a fringe topic that conservatives and libertarians argued about. However, recent remarks by leading figures in the banking sector underscore the gravity of the situation. JPMorgan Chase CEO Jamie Dimon's warning of a global market "rebellion," Bank of America CEO Brian Moynihan's call for decisive action, “The Black Swan” author Nassim Taleb's "death spiral" prognosis, and former House Speaker Paul Ryan's description of the debt crisis as "the most predictable crisis we’ve ever had" highlight the urgent need for a reassessment of the United States' fiscal trajectory.

The public's growing anxiety over government debt, with 57% of Americans surveyed by the Pew Research Center advocating for its reduction, reflects a shift in societal priorities towards fiscal responsibility. This concern gains further significance in light of its real-world implications, notably on housing affordability and the broader economic landscape. The precarious state of the housing market, exacerbated by rising interest rates, epitomizes the link between fiscal policy and individual economic prospects: as public debt grows, so too do interest rates.

The public's growing anxiety over government debt — with 57% of Americans surveyed by the Pew Research Center advocating for its reduction — reflects a shift in societal priorities towards fiscal responsibility. This concern gains further significance in light of its real-world implications, notably on housing affordability and the broader economic landscape. The precarious state of the housing market, exacerbated by rising interest rates, epitomizes the link between fiscal policy and individual economic prospects: as public debt grows, so too do interest rates.

The global standing of the U.S. dollar, serving as a "convenience yield," plays a pivotal role in the country's ability to manage its substantial debt without immediate negative consequences. However, a recent working paper released through the National Bureau of Economic Research finds that the loss of the dollar's status could amplify the debt burden by as much as 30%. This revelation underscores the imperative to critically evaluate the nation's fiscal direction.

The challenge in the nation — and many other developed countries — reflects what is going on for many consumers. Americans have increasingly turned to their credit cards, without paying down the balance, to cover regular expenses. A new report released through the New York Federal Reserve, for instance, shows that total credit card debt increased by $50 billion (or 4.6%) from the previous quarter to $1.13 trillion, marking the highest level on record in Fed data dating back to 2003 and the ninth consecutive annual increase.

The New York Fed report also shows an uptick in borrowers who are struggling with credit card, student, and auto loan payments. For example, 3.1% of outstanding debt was in some stage of delinquency in December — up from the 3% recorded the previous quarter, although still down from the average 4.7% rate seen before the Covid-19 pandemic began.

"Credit card and auto loan transitions into delinquency are still rising above pre-pandemic levels," said Wilbert van der Klaauw, economic research advisor at the New York Fed. "This signals increased financial stress, especially among younger and lower-income households."

An important strategy for retail investors during periods of uncertainty is to diversify. But how you diversify matters. Investing in the S&P 500 is good, but if all your savings are locked up in the S&P 500 and it plummets, then you’re in trouble. Even if a plunge took place in the next year, the S&P 500 would likely rebound, but you would still have to weather the storm.

An additional strategy is to have some exposure to crypto. Many people focus on Bitcoin, Ethereum, and other digital currencies. But at least equally important — if not more so — for long-run value creation in the digital assets market is hash rate, which reflects how much activity is taking place on a blockchain. Bitcoin, for instance, has seen a sustained increase in its hash rate alongside its price appreciation.

The upcoming year is an important one with substantial macroeconomic risks for both the nation and the consumer. Although some economic reports have been positive, we need to pay attention to the fundamentals and whether the data reflects transitory versus permanent shocks. The challenge for policymakers is to craft fiscal policies that foster sustainable growth and productivity, steering clear of scenarios where short-term fiscal expediencies precipitate long-term economic liabilities. The current path, however, mirrors the predicament of a borrower trapped in a cycle of debt, with interest rates surpassing their monthly income.

Let’s help make 2024 a transformational year for the better!

Christos Makridis

Understanding patterns of cyber conflict coverage in media

This article was originally published on Binding Hook (with Lennart Maschmeyer and Max Smeets).

In February 2014, the cyber threat intelligence community was stirred by the discovery of ‘The Mask’, a highly advanced hacking group thought to be backed by a national government. This group had been targeting a range of entities, including government agencies and energy companies. Kaspersky Lab, a Russian cybersecurity company, described their activity as the world’s most advanced APT (Advanced Persistent Threat) campaign. However, despite the sophistication of The Mask, which had been active since at least 2007, its media coverage was surprisingly limited, failing to make significant headlines.

Fast forward to 2018, when Kaspersky Lab reported on Olympic Destroyer, the cyber attack that disrupted the 2018 Olympics, paralyzing IT systems and causing widespread disruption. This incident garnered immediate and extensive media coverage, with over 2000 news stories published, showcasing a stark contrast in the media’s approach to reporting cyber operations.

These two cases highlight a critical and intriguing question: Why do some cyber operations receive extensive media attention while others do not? It is important because media reporting shapes how the public and policymakers perceive the cyber threat landscape. 

Yet, there has been a surprising lack of analytical research addressing why some cyber operations attract more media attention than others. Until now, our understanding has largely been shaped by anecdotal evidence rather than systematic analysis.

Our recently published academic article in the Journal of Peace Research begins to tackle this question by introducing a comprehensive collection of cyber operations reports derived from commercial threat intelligence providers, which are often the primary sources for journalists. Using multivariate regression, we identify the characteristics that correlate with the extent of media reporting on cyber operations. 
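The regression setup can be sketched as follows; the predictors, coefficients, and data below are synthetic stand-ins for illustration, not the paper's actual dataset or specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Illustrative operation characteristics (binary indicators).
disruptive = rng.integers(0, 2, n)   # effect operation vs. espionage
zero_day   = rng.integers(0, 2, n)   # uses a zero-day exploit
adversary  = rng.integers(0, 2, n)   # attributed to a major adversary

# Synthetic outcome: log story count generated from assumed coefficients.
log_stories = (1.0 + 0.3 * disruptive + 0.8 * zero_day + 0.1 * adversary
               + rng.normal(0, 0.5, n))

# Multivariate ordinary least squares: which characteristics
# correlate with the volume of media coverage?
X = np.column_stack([np.ones(n), disruptive, zero_day, adversary])
beta, *_ = np.linalg.lstsq(X, log_stories, rcond=None)
print(beta.round(2))  # intercept and three slope estimates
```

In the real analysis, statistical significance of each coefficient (not just its sign) determines which hypotheses survive, as the four tests below illustrate.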

Four tests

First, we explored the intensity of effects produced by cyber operations. Historically, violent and shocking news stories have garnered more attention, encapsulated in the adage, ‘if it bleeds, it leads.’ We hypothesized that the more intense and threatening the effects of a cyber operation, the greater the media coverage it would receive. Our findings revealed that disruptive and destructive cyber operations generate more news stories than their espionage counterparts. However, while cyber effect operations receive more coverage than espionage, this result is not statistically significant.

Next, we examined the type of target involved in cyber operations. Previous assumptions paralleled the media coverage of cyber operations with terrorism, where attacks on more politically or symbolically significant targets garner more attention. However, our research indicates a different pattern. We found that operations targeting the military or financial sectors actually generate less media coverage.

The third aspect we considered is the perceived sophistication of cyber operations. The media often gravitate toward stories that are easily understandable and remarkable. In this context, we expected cyber operations employing zero-day exploits, an easily observable indicator of sophistication, to receive more coverage. Our research supports this expectation, showing a significant increase in media stories for cyber operations that use these advanced techniques.

Lastly, we investigated the origin of the threat. Previous studies in communications have highlighted a media tendency toward bias against those outside the audience’s primary demographic, often leading to an exaggerated portrayal of non-white individuals in terrorism-related news.

Extending this insight to the realm of cyber threats, we anticipated a similar pattern, with adversarial threat groups being overrepresented in media narratives. This presumption aligns with past research, which observed that operations attributed to Russia, China, Iran, and North Korea tend to receive more attention.

However, our research does not find a significant correlation between media coverage and cyber operations attributed to key adversaries of Western powers, such as Russia, China, Iran, and North Korea.

Double bias

Our findings reveal a ‘double bias’ in media reporting on cyber operations. This bias originates from the reporting practices of commercial threat intelligence firms, further skewed by media outlets’ preference for stories that resonate with their audiences. This layered selectivity results in a narrow and potentially distorted portrayal of cyber threats, influencing academic discourse and policy-making.

There is a fascinating trend to watch regarding the double bias. Traditionally, mostly Western cyber threat intelligence firms have publicly disclosed details on APTs. Kaspersky Lab, based in Russia, stands out as an exception. The company has also published on various Western covert cyber operations that haven’t been widely reported elsewhere. However, lately, Chinese cybersecurity companies have begun to publicly attribute cyber threat actors as well. If this trend continues, it will be intriguing to observe how the media reacts to these reports and how much they are taken as credible compared to reports from Western intelligence companies. 

Christos Makridis

Unpacking the Myths of Employee Ownership

This article was originally published in Inc Magazine with Bill Fotsch.

Last year, Pete Stavros, a senior partner at KKR and the founder of Ownership Works, published an article in Fortune championing shared company ownership as the "missing path to the American dream." And for good reason--an increasing share of Americans believe the American dream has deteriorated, with only 19 percent reporting confidence that their children's generation will be better off than their own, according to a recent NBC poll. Pete's proposal received support not just from labor advocates but also from the investment community, including major financial institutions.

We, too, find common ground with Stavros, particularly in improving business results and the lives of the employees that drive those results. But we diverge when it comes to the implementation of employee ownership. The difference, as they say, is in the details--arguably, ones that make or break the success of employee ownership.

Recognizing the many benefits of employee ownership, our perspective emphasizes that it does not automatically produce the desired outcomes on its own. To rise to the American dream level, ownership must be earned, not simply given. In other words, ownership must be realized by gains in productivity and value-added; it cannot sustainably be given out in perpetuity. 

That raises a chicken-or-the-egg question. Let's go back to Corey Rosen's 1987 Harvard Business Review research, which revealed that ESOP (employee stock ownership plan) companies with participation plans grew three to four times faster than those without. The key word here is "participation." Rosen, an otherwise staunch supporter of employee ownership, did not shy away from revealing this detail. For the ESOP to thrive, employees must be involved in the plan and earn the reward.

That was over three decades ago; has the narrative changed?

Take, for instance, the Harvard Business School Working Knowledge article discussing how KKR's ownership model dramatically changed worker behavior and company success. It's a compelling narrative, but it may tempt readers toward an overly utopian view of ESOPs. An employee will not necessarily behave like an owner simply because they are given equity, any more than a pre-med student will behave like a doctor if given an unearned degree. Ownership, in contrast, is the fruit of stewardship and investment.

Recent conversations with Gil Hantzsch, President of MSA, reminded us that giving employees ownership changes a company's form more readily than its function. MSA is a thriving ESOP company, yet when they attempted to pool resources and share best practices across their various branches, the ownership model did not automatically encourage collaboration, trust, or shared action. It wasn't until MSA introduced structured interactions--a series of 'flocking events', where subject-matter experts met in person to get to know one another, build trust and share insights--that their best-practices initiative gained traction. Here, the company equity had been in place for years, but was not wholly sufficient to affect behavior.  

If it's clear that a company would do better if its employees began to think and act like owners, and ownership at face-value does not transform employees, then what does? 

The genesis of a successful ESOP doesn't begin with the ESOP itself. It starts with cultivating a culture of ownership among employees, treating them as true partners in the mission to deliver value to customers and ensure sustainable profitability. Many ESOP successes lead back to this fundamental approach: companies such as MSA Engineering, Trinity Products, Springfield Remanufacturing, and Dorian Drake. And there is no shortage of successful companies that never had employee ownership as part of their arsenal: Southwest Airlines had profit sharing long before it had any employee equity program.

When employees can see and understand the economics of the business, they learn how their day-to-day behaviors influence the bottom line. Then they can innovate and contribute. When employees are actively in conversation with the customer, they have insight into what drives the business' value. As confidence builds, employees develop an eye toward long-term strategy. It's algebra, then calculus. This scaffolding ensures employees are prepared for the responsibility of ownership and can make the most of it.

Our five years of research on this management approach, called Economic Engagement, contains 8 waves of 50-150 companies per wave and is published in Inc. ("A Key Strategy to Double Your Profitable Growth"). It includes fifteen questions aimed at understanding drivers behind employee behavior and company success, one of which is employee ownership. While employee ownership is part of the equation, the existing body of research does not single it out as the ultimate driver of performance or employee well-being. It's the combination that produces superior results:

  1. Customer engagement is the starting point since customers define value and thus the economics of any business. Ensure that all employees have a window on what customers value on an ongoing basis, since customers change over time.

  2. Economic understanding aligns all employees in a common understanding of what defines success for the company that evolves from customer engagement and the value they are adding.

  3. Economic transparency enables all employees to see how the company is doing and learn from successes and failures.  

  4. Economic compensation gives all employees a shared stake in the results, making them economic partners in the company. This ranges from wages to incentive compensation and long-term equity.

  5. Employee participation leads to lower turnover and better relationships between owners/managers and employees, by encouraging employees to actively participate in the business, often beyond their defined role.

At economically engaged companies, employees are immersed in the operational economics that power profitability--metrics like product shipments, monthly job margin dollars, and the acquisition of new customers. Employees learn to track and forecast these key numbers on a weekly basis. They're empowered to steer these numbers in a positive direction, while also reaping the rewards of enhanced performance. They're likely to forge long-lasting and fruitful careers, as well as source quality referrals. The environment elevates the participation of the employee to a level that transcends transactional ownership.

Employee ownership is good, but by itself, it has less impact on employee behavior. It's hard for employees to feel motivated by a potential benefit of an indeterminate amount at some point in the distant future. It's hard enough to get employees to participate in 401(k) matching programs. Shared ownership is not a panacea; it's a tool. So, let's agree that we should improve business results and the lives of the employees that drive those results--by learning from each other, from employees, and from research.

Christos Makridis

We studied 235 stocks–and found that ESG metrics don’t just make a portfolio less profitable, but also less likely to achieve its stated ESG aims

This article was originally published in Fortune.

Institutions have become increasingly skeptical about ESG ratings–and rightly so. In our recent research, we show how the inclusion of ESG metrics in assembling a portfolio can lead to unintended consequences.

After gathering ESG data alongside the subset of stocks traded on a daily basis between 1998 and 2020 on the three major exchanges, we quantitatively studied the inclusion of ESG metrics in two ways. First, we considered trading strategies that rely only on returns, rather than a combination of returns and ESG scores. We found that non-ESG rules incorporating returns result in higher ESG scores compared with ESG-based rules.

Second, we considered trading strategies that prioritize the stocks with the highest overall ESG score, reflecting the increased attention that ESG has received in recent years. We found that this does not result in the most efficient portfolio in terms of risk-adjusted returns: while including ESG data leads to portfolios with higher returns, it comes at the cost of more volatility.
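A Sharpe-style ratio (mean excess return divided by volatility) makes this trade-off concrete; a minimal sketch with hypothetical figures, not estimates from our study:

```python
def sharpe(mean_excess_return: float, volatility: float) -> float:
    """Return per unit of risk: higher means a more efficient portfolio."""
    return mean_excess_return / volatility

# Hypothetical annualized figures, for illustration only.
returns_only = sharpe(mean_excess_return=0.07, volatility=0.14)
esg_weighted = sharpe(mean_excess_return=0.08, volatility=0.20)

# The ESG-weighted portfolio earns more in raw terms, but its extra
# volatility leaves it with a lower risk-adjusted return.
print(round(returns_only, 2), round(esg_weighted, 2))  # → 0.5 0.4
```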

Our results may come as a surprise: Because of the noise inherent in ESG metrics, including them creates estimation risk and worsens the portfolio allocation. In fact, we find that the explicit targeting of ESG metrics leads to a portfolio allocation that is economically and environmentally worse than the market allocation. That is consistent with prior research that finds substantial disagreement among ESG ratings agencies due to their chosen ESG metrics, how they measure the metrics, and how they weight across the metrics in forming overall scores. Our results are also consistent with recent research that has shown how the inclusion of uncertainty associated with an ESG metric lowers financial returns.

It’s as if you are trying to hit a moving target–you will not only miss the target but also create a mess in the process. Even though the desire to achieve broader impact through ESG is good, the devil is in the details: the measurement and choice of metrics are enormously important, and the absence of clarity and consensus around them will introduce significant noise into investors’ portfolio choice conundrum.

To make further sense of these results and understand how the average American thinks about ESG matters, we surveyed a nationally representative sample of 1,500 people and asked them to rank 10 ESG topics. While we can only speak to the relative ranking of each topic, we find no statistical evidence that individuals believe companies should focus on other priorities besides maximizing shareholder value after accounting for their own ranking of ESG issues.

Furthermore, even those who personally rank issues such as climate change among the greatest priorities recognize that it is not necessarily within a company’s objectives to address them. If anything, respondents tend to rank company objectives around paying a living wage higher than their own personal rankings of it. In this sense, whereas a frequent justification for active ESG policies is that people believe companies should be doing more, our result suggests it is just a reflection of people’s own preferences that they superimpose onto the company.

We also conducted a simple randomized experiment where we provided some respondents with information from a scientific study about the costs of renewable energy, in contrast to the control group, to gauge the impact of information on attitudes toward ESG. Then, we asked them about their support for renewable policies. We found that information exposure lowered their support, after learning about what often amounts to overlooked costs. This divergence between personal and organizational ESG objectives, combined with the muddled ESG scoring landscape, reiterates the potential pitfalls of heavily relying on these scores for investment decisions.

An essential takeaway is the need for a balanced approach. While ESG metrics can provide valuable insights into a company’s broader societal impact, they should be seen as a supplement, not a replacement, to traditional financial metrics. Investors should be wary of overemphasizing ESG at the expense of established measures that have stood the test of time.

Christos A. Makridis, Ph.D., is the founder and CEO of Dainamic Banking and holds academic affiliations at Stanford University, among other institutions.

Majeed Simaan, Ph.D., is a professor of finance and financial engineering at the School of Business at Stevens Institute of Technology.


Researchers reveal the hidden peril of ‘labeling’ employees

This article was originally published on Fast Company.

In today’s hyper-competitive business landscape, the quest to quantify and categorize employee performance is more aggressive than ever. Consulting giants like McKinsey offer provocative frameworks that promise to neatly sort your workforce into boxes, “amplifying the impact of star performers” by identifying six distinct employee groups, or archetypes.

Such categorizations echo the controversial strategies of yesteryear, notably Jack Welch’s “Rank and Yank” policy. Remember how that worked out? Welch’s system had its moment in the sun, but it eventually fell from grace, proving to be a divisive and morale-crushing strategy.

Before you sign that consulting agreement and begin using their employee filtration tools, it’s worth pausing to consider the powerful psychological implications of labeling. We need to talk about the Pygmalion Effect—a concept that suggests these labels could be doing more harm than good.

The Pygmalion Effect refers to the concept that the labels we attach to people can influence their behavior in ways that confirm these labels. Imagine you label someone as a “disruptor.” Over time, not only will they start acting the part, but their managers and colleagues will treat them as such, reinforcing the behavior. In other words, the label becomes a self-fulfilling prophecy, locking individuals into roles that may not reflect their potential or future performance.

For instance, a sales manager who labels a team member as “low potential” might unconsciously offer fewer growth opportunities, affecting the employee’s performance and motivation to step up. Or consider how many talented employees might be pigeonholed into roles that don’t fully exploit their skills, simply because of a label slapped onto them during a performance review.

Here’s the kicker: Employees aren’t static entities. Their performance and engagement levels can change, often dramatically, in response to various factors like work environment and personal circumstances. Management practices alone have been shown to affect productivity by around 20%. We’ve seen firsthand how an employee branded as a “value destroyer” turned into a key asset when engaged and motivated properly. To think that an employee’s worth can be permanently categorized is to misunderstand the dynamic nature of human capital.

We have seen how eschewing labels propels results for hundreds of consulting clients, including:

  • A U.K. manufacturer’s owner had labeled the head of their model shop a troublemaker, or “value destroyer” in McKinsey terms. Ignoring this, the owner solicited his input on how to improve the business. The model shop head generated profitable ideas, leading to increased earnings. He emerged a leader, or “thriving star” in McKinsey terms.

  • A Kentucky landscape company viewed its employees as hired hands, or “mildly disengaged” in McKinsey terms. Treating them like trusted partners, with a shared focus and an incentive to increase job margin per month, drastically improved productivity and profits, as well as innovation. One truck driver, running a snowplow, generated a new client on his own by plowing an unplowed parish parking lot, asking only that the pastor take a call from his company sales team. No one told him to do this. With focus and incentive, he transformed from disengaged to “reliable and committed.”

  • An urgent care business had come to assume they were stuck with debilitating turnover; “quitters,” McKinsey might suggest. But after examining exit interviews and addressing the common issues (particularly lack of management listening and acting on provider input), the “quitters” stopped quitting. Patient NPS scores soared, as did profits.

The real cost you pay when working with imprudent consultants isn’t their expensive fees, it’s the potential stifling of employee growth and innovation. When you label people, you’re not just putting them in boxes; you’re putting a ceiling on what they can achieve. And in today’s fast-paced business world, that’s a luxury no company can afford. It turns out, employees can and do change over time, something you can either enhance or stifle.

To be sure, there are some employees who are simply poor performers and not right for the job even when you work with them to explore a change in responsibilities.

Of course, one solution is to screen employees better. Some of our prior research, for example, has found that employees who demonstrate greater intellectual tenacity tend to perform much better than their counterparts, and their advantage in the labor market has grown over time as work has become more complex. One way to think about this result is that persistence and curiosity in the workplace are quintessential characteristics for not only problem solving, but also interpersonal dynamics. But hindsight is always 20-20 and the wrong candidates might pass through the screening.

Instead of spending resources on categorizing employees, why not invest in creating an environment that promotes positive behavior change? By focusing on behaviors rather than labels, companies become more growth-oriented and attentive to what can change.

This fosters a culture where employees are empowered to evolve and adapt, driving not just individual success but also organizational excellence. Our multiple waves of survey research—on what we broadly refer to as Economic Engagement—shows that when companies partner with employees to serve their customers profitably, behavior changes, and that in turn leads to greater profitability.

Economic Engagement isn’t your run-of-the-mill, feel-good company culture. Instead, it’s a well-structured, results-driven management system underpinned by transparency, a deep understanding of economics, and active employee involvement.

Employees aren’t just taught how to read a balance sheet. They’re immersed in the operational economics that power profitability, metrics like product shipments, monthly job margin dollars, and the acquisition of new customers. Employees learn to track and forecast these key numbers on a weekly basis. They’re empowered to steer these numbers in a positive direction, while also reaping the rewards of enhanced performance.

Employees at an economically engaged company are likely to forge a long-lasting and fruitful career there, as well as to become a source of quality referrals. The environment elevates the employee’s participation to a level that transcends categorization and engenders true engagement.

Before you take any steps to classify your workforce, consider other avenues for understanding and unlocking their potential. Sometimes, the smartest decision is to sidestep the labels and focus on cultivating a culture that brings out the best in everyone.

Christos A. Makridis is the CEO and founder of Dainamic, a financial technology startup, and a research affiliate at Stanford University. He holds doctorates in economics and management science and engineering from Stanford University. Follow @hellodainamic.

Bill Fotsch is a business coach, investor, researcher, and writer. He holds an engineering degree and an MBA from Harvard Business School and is the founder of Economic Engagement.


The FDIC’s 2023 Risk Review shows the surprising resilience of community banks despite inflation and shifting interest rates

This article was originally published in Fortune.

The Federal Deposit Insurance Corporation (FDIC) recently released its Risk Review for 2023, detailing a substantial increase in unrealized losses–$617.8 billion in the last quarter of 2022 and $515.5 billion in the first quarter of 2023–driven in large part by “declines in medium- and long-term market interest rates.” If banks face a situation where they need liquidity and therefore have to sell investments at a loss, the depreciation of their portfolios could be a deathblow that renders many financial institutions insolvent.

While several asset quality indicators, such as the delinquency rate and noncurrent loan rate, remained favorable, the reality is that the banking sector is not in good shape–and declining macroeconomic conditions have exacerbated risk factors. Rising interest rates, coupled with inflation, have simultaneously affected bank balance sheets and consumer debt and expenditures. However, there is an important silver lining in the report: Community banks have fared much better than their larger counterparts–and helped sustain small business lending.

One important metric for gauging the health of a bank is its net interest margin (NIM), which reflects the interest income generated from credit products, such as loans and mortgages, net of outgoing interest payments to holders of savings accounts and certificates of deposit. Although NIMs increased in the industry as a whole in 2022, the increase was concentrated among community banks at 3.45%–up from roughly 3.25% in 2021. Since 2012, community banks have had roughly 0.5 percentage points higher NIMs than their non-community counterparts.
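The NIM calculation itself is straightforward: net interest income divided by average earning assets. The sketch below uses made-up figures for a stylized community bank, not data from the FDIC report:

```python
# Minimal sketch of the net interest margin (NIM) calculation.
# All dollar figures are hypothetical, for illustration only.

def net_interest_margin(interest_income, interest_expense, avg_earning_assets):
    """NIM = (interest income - interest expense) / average earning assets."""
    return (interest_income - interest_expense) / avg_earning_assets

# A stylized community bank: $50M earned on loans and securities,
# $15M paid out on deposits, against $1B in average earning assets.
nim = net_interest_margin(50_000_000, 15_000_000, 1_000_000_000)
print(f"NIM: {nim:.2%}")  # NIM: 3.50%
```

A bank in this stylized position would sit near the 3.45% community-bank average the FDIC reports for 2022.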

In addition, community banks have played a major role in supporting small business lending. Even though they hold only 14.7% of total industry loans, they accounted for 23.6% of total small business loans in 2022. Moreover, the increase in lending did not come with the additional costs of higher risk: The commercial and industrial early-stage past-due rate and noncurrent rate for community banks actually declined at the onset of the pandemic and have remained low at around 0.5%, compared with roughly double that among non-community banks.

To better understand the health of the banking sector at a higher frequency, I launched a monthly nationally representative banking survey of 1,500 respondents in June. Consistent with these results from the FDIC Risk Review report, I found that individuals who borrow from smaller banks are much more confident in the safety of their deposits. For example, 34.5% of respondents who work with a small bank report that their bank is “rock solid” with “no concerns,” whereas only 26% of those with a medium-sized bank report similarly (and 33% among large banks). There is a growing recognition that small banks are better positioned to maintain the trust and loyalty of their borrowers because their interactions with customers are more frequent and their investments more prudent, particularly in their local communities.

One potential concern with these results is that differences in perception of risk simply reflect differences in the type of borrower. However, all results are robust to controlling for a wide array of demographic factors (age, race, education, marital status, employment status) and the respondents’ overall perception of the banking sector (not their own bank). Furthermore, those who work with a small bank are less optimistic about interest rates and central bank policy overall, so if anything, these results are overly conservative.

As 2023 comes to a close, it is important to remember the important role that small banks play in providing liquidity in the banking system–often with the least amount of risk. Due to their exposure to varying macroeconomic conditions, larger and mid-sized banks will need to pay greater attention to the quality of their assets and the health of their balance sheets.

Christos A. Makridis is the founder and CEO of Dainamic, a financial technology startup that empowers banks with regulatory compliance and forecasting software, in addition to serving as a research affiliate at several leading universities. Christos holds dual doctorates in economics and management science & engineering from Stanford.


A Missing Link for Improving Education

This article was originally published in City Journal.

Republican presidential candidate Vivek Ramaswamy has said that “the nuclear family is the best form of governance known to mankind.” That notion has its critics, but it is increasingly shared by many across the political spectrum. Two recent books, for instance—Robert Cherry’s The State of the Black Family: Sixty Years of Tragedies and Failures and New Initiatives Offering Hope and Melissa Kearney’s The Two-Parent Privilege: How Americans Stopped Getting Married and Started Falling Behind—present evidence that family dynamics influence a child’s life chances more than any other factor, including formal education. Unfortunately, state-level educational assessments and the National Assessment of Educational Progress (NAEP) include no student information on family structure (for example, whether a student lives in a two-parent, single-parent, guardian, or foster household), making it harder to pursue data-driven educational interventions.

In our book, The Economics of Equity, we discuss state-level policy interventions to involve parents more effectively in their children’s education, to implement initiatives such as after-school programming targeted toward students most in need, and to help capable students of low socioeconomic status. These recommendations are especially important given that students of low socioeconomic status spend substantially less time on educational activities outside of school. We cannot keep throwing more money at this problem; we have to address the root issue, which starts with the family. If schools could access data on family dynamics, they could craft more realistic parent-teacher-student-school responsibility agreements and create tiered intervention systems that take family capability and needs into account.

The good news is that this has been done before. For instance, one of us has written about how a school serving students from low-income families achieved Blue Ribbon status through the leadership of its principal. The principal’s key intervention was an afterschool program focusing on students with unstructured home environments. The principal was only able to identify those students, however, through teacher recommendations, not an official database that tracked their home status.

Interventions based on students’ gender, race, class, learning disabilities, or English proficiency alone have led to many ineffective initiatives. Each of these characteristics is correlated with achievement gaps but is not their driving factor. In some cases, as with California’s new math requirements, officials are promoting initiatives that not only cost taxpayers dearly but also risk worsening achievement gaps.

Our book also summarizes the empirical evidence on charter and private schools, which have historically been better suited to parental involvement. Since parents must choose these schools, the schools can require a certain level of accountability from them. They can also create and adapt systems that meet parents’ needs without having to pass through the layers of bureaucracy and union battles common in public schools.

Unfortunately, the student data currently available to public school educators don’t help them address problems stemming from family status. Providing schools with data on family structure would give them a vital tool for addressing academic achievement gaps and improving educational outcomes.

Goldy Brown III is an associate professor at Whitworth University’s school of education, director of the university’s Educational Administration Program and a former school principal. Christos A. Makridis is a research affiliate at Stanford University, CEO/founder at Dainamic, and holds appointments in other institutions.


The potential of AI systems to improve teacher effectiveness in Spokane public schools

This article was originally published in the Spokesman-Review (with Goldy Brown).

The United States’ K-12 education system has faced challenges for years, but it has confronted even greater headwinds recently, following pervasive school closures and their effects on student mental health and learning outcomes. Student test scores in math and reading fell to their lowest levels in 2022, according to the National Assessment of Educational Progress. These deteriorating outcomes make effective instruction across classrooms more important than ever.

This year, Spokane Public Schools announced that it is pioneering a novel approach to evaluating and improving teacher effectiveness using AI systems. While AI is sometimes thought of as displacing jobs, it can also augment our productivity and learning. And as this school district in Spokane is exploring, AI systems can potentially help lower-performing teachers improve their quality of instruction at scale and embed greater consistency into teaching nationwide.

School districts often struggle with limited resources to provide continuous, quality training for their teachers, and bureaucratic impediments make removing ineffective teachers an arduous process. As a result, many students suffer under the instruction of teachers who, despite their best intentions, are ill-equipped to meet their educational needs. A large body of empirical research, led in part by professor Eric Hanushek at the Hoover Institution, has shown that teacher quality is the single most important school-based determinant of learning outcomes.

The recent advances in large language models, such as Bard and ChatGPT, highlight the ways that AI can improve training and assessment of teachers at scale without having to involve principals and other training professionals for each individualized case. In particular, AI-powered platforms can provide a personalized, data-driven approach to teacher training.

By analyzing classroom data and building statistical models that predict learning outcomes as a function of teacher characteristics and inputs, these systems can offer real-time feedback and guidance, addressing teachers’ specific areas of weakness and offering them ways to improve. For example, if a teacher consistently struggles with engaging students or explaining complex topics, the AI could provide tailored strategies and methods to improve in these areas.
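The statistical core of such a system can be sketched as a simple regression of a learning outcome on a teacher-level input. The example below uses a single invented input (hours of coaching received) and made-up class scores; a real system would use many more features and richer models:

```python
# Toy sketch of predicting a learning outcome from a teacher input via
# ordinary least squares. All inputs and scores are invented examples.

def fit_ols(xs, ys):
    """Fit y = a + b*x by least squares; return (intercept a, slope b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

coaching_hours = [10, 25, 40, 15, 60]   # hypothetical teacher-level input
avg_test_score = [62, 70, 78, 64, 88]   # hypothetical class-average outcome

a, b = fit_ols(coaching_hours, avg_test_score)

# Feedback for a hypothetical teacher with 30 hours of coaching
predicted = a + b * 30
print(f"score = {a:.1f} + {b:.2f} * hours; predicted at 30h: {predicted:.1f}")
```

The fitted slope is the kind of quantity an AI coaching platform could surface: an estimate of how much an additional unit of a given input is associated with improved outcomes, which then informs where a teacher should focus.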

Moreover, AI-based coaching systems offer scalability and efficiency that traditional teacher training programs cannot match. Such a system can serve numerous teachers simultaneously, providing continuous support and learning opportunities. This continuous feedback loop would allow teachers to refine their skills constantly and adapt their teaching styles to their students’ evolving needs. Furthermore, AI systems would avoid putting further strain on the educational system that has already been stretched thin post-COVID.

While the potential benefits of AI in teacher coaching are vast, successfully implementing such a system requires careful consideration. An essential aspect of managing these AI systems is ensuring they are ethically used and respect teachers’ and students’ privacy. Confidentiality of data is paramount, and AI systems must be designed and regulated to ensure they comply with laws and ethical guidelines pertaining to data protection.

For example, our recent book, “The Economics of Equity in K-12 Education: A Post-Pandemic Policy Handbook for Closing the Opportunity Gap and Using Education to Improve the American Economy,” prominently features recommendations by professor Ryan Baker of the University of Pennsylvania, emphasizing that the use of AI in education will require data sharing between schools and vendors using the latest advances in cryptography, like zero-knowledge proofs, to secure sensitive information.

Additionally, AI systems need regular fine-tuning to remain effective and relevant. This process would involve an ongoing cycle of feedback from teachers, AI developers and education experts to ensure the AI evolves in line with the changing dynamics of classrooms and the educational landscape. For instance, updates in curriculum, pedagogical strategies and teaching methods should be reflected in the AI’s feedback and coaching suggestions.

Ultimately, AI is a tool, rather than a replacement for human connection and judgment: Decisions must remain with the educators and administrators. AI can provide data-driven insights and recommendations, but it’s the teachers and administrators who will interpret this information in the context of their unique classroom environments and make the final decisions.

The pioneering work by Spokane Public Schools represents a novel attempt to solve the longstanding challenge of the deterioration in student learning outcomes driven, at least in large part, by the decline in teacher quality and absence of incentives. With careful management and continuous refinement, AI systems could revolutionize teacher coaching, significantly improving the quality of K-12 education across the nation.

While challenges remain, the path forward shows immense promise, offering hope to educators and students alike.

Goldy Brown III is an associate professor in the Graduate School of Education at Whitworth University in Spokane. He is also the director of Whitworth University’s Educational Administration Program. He was a school principal for almost a decade. Schools that he administered earned four state recognition awards for closing the achievement gap between low-income and affluent students.

Christos A. Makridis is a research affiliate at Stanford University’s Digital Economy Lab and COO/co-founder of Living Opera, a multimedia startup focused on empowering and educating performing artists. He holds doctoral and master’s degrees in economics and management science and engineering from Stanford University.


Should we ban ransomware payments? It’s an attractive but dangerous idea

This article was originally published on Cointelegraph Magazine.

A successful cyberattack on critical infrastructure — such as electricity grids, transportation networks or healthcare systems — could cause severe disruption and put lives at risk. 

Our understanding of the threat is far from complete since organizations have historically not been required to report data breaches, but attacks are on the rise according to the Privacy Rights Clearinghouse. A recent rule from the United States Securities and Exchange Commission should help clarify matters further by now requiring that organizations “disclose material cybersecurity incidents they experience.”

As the digital world continues to expand and integrate into every facet of society, the looming specter of cyber threats becomes increasingly critical. Today, these threats take the form of sophisticated ransomware attacks and debilitating data breaches, particularly targeting essential infrastructure.

A major question from policymakers, however, is whether businesses faced with crippling ransomware attacks and potentially life-threatening consequences should have the option to pay out large amounts of cryptocurrency to make the problem go away. Some believe ransom payments should be banned for fear of encouraging ever more attacks.

Following a major ransomware attack in Australia, its government has been considering a ban on paying ransoms. The United States has also more recently been exploring a ban. But other leading cybersecurity experts argue that a ban does little to solve the root problem.

Ransomware and the ethical dilemma of whether to pay the ransom

At the most basic level, ransomware is simply a form of malware that encrypts the victim’s data and demands a ransom for its release. A recent study by Chainalysis shows that crypto cybercrime is down by 65% over the past year, with the exception of ransomware, which saw an increase. 

“Ransomware is the one form of cryptocurrency-based crime on the rise so far in 2023. In fact, ransomware attackers are on pace for their second-biggest year ever, having extorted at least $449.1 million through June,” said Chainalysis.

Even though there has been a decline in the number of crypto transactions, malicious actors have been going after larger organizations more aggressively. Chainalysis continued:

“Big game hunting — that is, the targeting of large, deep-pocketed organizations by ransomware attackers — seems to have bounced back after a lull in 2022. At the same time, the number of successful small attacks has also grown.”

The crippling effect of ransomware is especially pronounced for businesses that heavily rely on data and system availability.

The dilemma of whether to pay the ransom is contentious. On one hand, paying the ransom might be seen as the quickest way to restore operations, especially when lives or livelihoods are at stake. On the other hand, succumbing to the demands of criminals creates a vicious cycle, encouraging and financing future attacks.

Organizations grappling with this decision must weigh several factors, including the potential loss if operations cannot be restored promptly, the likelihood of regaining access after payment, and the broader societal implications of incentivizing cybercrime. For some, the decision is purely pragmatic; for others, it’s deeply ethical.

Should paying ransoms be banned?

The increasing incidence of ransomware attacks has ignited a policy debate: Should the payment of ransoms be banned? Following a major ransomware attack on Australian consumer lender Latitude Financial, in which millions of customer records and IDs were stolen, some have begun to advocate for a ban on paying the ransom as a way of deterring attacks and depriving cybercriminals of their financial incentives. 

In the United States, the White House has voiced its qualified support for a ban. “Fundamentally, money drives ransomware and for an individual entity it may be that they make a decision to pay, but for the larger problem of ransomware that is the wrong decision… We have to ask ourselves, would that be helpful more broadly if companies and others didn’t make ransom payments?” said Anne Neuberger, deputy national security advisor for cyber and emerging technologies in the White House.

While proponents argue that a ban would deter criminals and reorient priorities for C-suite executives, critics warn that it might leave victims in an untenable position, particularly when a data breach could lead to loss of life, as in the case of attacks on healthcare facilities.

“The prevailing advice from the FBI and other law enforcement agencies is to discourage organizations from paying ransoms to attackers,” Jacqueline Burns Koven, head of cyber threat intelligence for Chainalysis, tells Magazine.

“This stance is rooted in the understanding that paying ransoms perpetuates the problem, as it incentivizes attackers to continue their malicious activities, knowing that they can effectively hold organizations hostage for financial gain. However, some situations may be exceptionally dire, where organizations and perhaps even individuals face existential threats due to ransomware attacks. In such cases, the decision to pay the ransom may be an agonizing but necessary choice. Testimony from the FBI recognizes this nuance, allowing room for organizations to make their own decisions in these high-stakes scenarios, and voiced opposition to an all out ban on payments.” 

Another complicating factor is that an increasing number of ransomware attacks, according to Chainalysis, may not have financial demands but instead focus on blackmail and other espionage purposes. 

“In such cases, there may be no feasible way to pay the attackers, as their demands may go beyond monetary compensation… In the event that an organization finds itself in a situation where paying the ransom is the only viable option, it is essential to emphasize the importance of reporting the incident to relevant authorities.” 

“Transparency in reporting ransomware attacks is crucial for tracking and understanding the tactics, techniques and procedures employed by malicious actors. By sharing information about attacks and their aftermath, the broader cybersecurity community can collaborate to improve defenses and countermeasures against future threats,” Koven continues.

Could we enforce a ban on paying ransomware attackers?

Even if a ban were implemented, a key challenge is the difficulty in enforcing it. The clandestine nature of these transactions complicates tracing and regulation. Furthermore, international cooperation is necessary to curb these crimes, and achieving a global consensus on a ransom payment ban might be challenging. 

While banning ransom payments could encourage some organizations to invest more in robust cybersecurity measures, disaster recovery plans and incident response teams to prevent, detect and mitigate the impact of cyberattacks, it still amounts to penalizing the victim and making the decision for them.

“Unfortunately, bans on extortions have traditionally not been an effective way to reduce crime — it simply criminalizes victims who need to pay or shifts criminals to new tactics,” says Davis Hake, co-founder of Resilience Insurance, who notes that claims data over the past year show that while ransomware is still a growing crisis, some clients are already taking steps toward becoming more cyber-resilient and able to withstand an attack.

“By preparing executive teams to deal with an attack, implementing controls that help companies restore from backups, and investing in technologies like EDR and MFA, we’ve found that clients are significantly less likely to pay extortion, with a significant number not needing to pay it at all. The insurance market can be a positive force for incentivizing these changes among enterprises and hit cybercriminals where it hurts: their wallets,” Hake continues.

The growing threat and risk of cyberattacks on critical infrastructure

The costs of ransomware attacks on infrastructure are often ultimately borne by taxpayers and municipalities that are stuck with cleaning up the mess.

To understand the economic effects of cyberattacks on municipalities, I released a research paper with several faculty colleagues, drawing on all publicly reported data breaches and municipal bond market data. We found that a 1% increase in county-level cyberattacks covered by the media leads to an increase in offering yields ranging from 3.7 to 5.9 basis points, depending on the level of attack exposure. Evaluating these estimates at the average annual issuance of $235 million per county implies $13 million in additional annual interest costs per county.

One reason for the significant adverse effects of data breaches on municipalities and critical infrastructure stems from the interdependencies in these systems. Vulnerabilities related to the Internet of Things (IoT) and industrial control systems (ICS) increased at an “even faster rate than overall vulnerabilities, with these two categories experiencing a 16% and 50% year over year increase, respectively, compared to a 0.4% growth rate in the number of vulnerabilities overall,” according to the X-Force Threat Intelligence Index 2022 by IBM.

A key factor contributing to this escalating threat is the rapid expansion of the attack surface due to IoT, remote work environments and increased reliance on cloud services. With more endpoints to exploit, threat actors have more opportunities to gain unauthorized access and wreak havoc. 

“Local governments face a significant dilemma… On one hand, they are charged with safeguarding a great deal of digital records that contain their citizens’ private information. On the other hand, their cyber and IT experts must fight to get sufficient financial support needed to properly defend their networks,” says Brian de Vallance, former DHS assistant secretary.

Public entities face a number of challenges in managing their cyber risk — chief among them is budget. IT spending accounted for less than 0.1% of overall municipal budgets, according to M.K. Hamilton & Associates. This traditional underinvestment in security has made it increasingly difficult for these entities to obtain insurance from the traditional market.

Cybersecurity reform should involve rigorous regulatory standards, incentives for improving cybersecurity measures and support for victims of cyberattacks. Public-private partnerships can facilitate the sharing of threat intelligence, providing organizations with the information they need to defend against attacks. Furthermore, federal support, in the form of resources or subsidies, can help smaller organizations – whether small businesses or municipalities – that are clearly resource-constrained, so they have funds to invest more in cybersecurity.

Toward solutions

So, is the solution a market for cybersecurity insurance? A competitive market to hedge against cyber risk will likely emerge as organizations are increasingly required to report material incidents. But a cyber insurance market would still not solve the root of the problem: Organizations need help becoming resilient. Small and mid-sized businesses, according to my research with professors Annie Boustead and Scott Shackelford, are especially vulnerable.

“Investment in digital transformation is expected to reach $2T in 2023 according to IDC and all of this infrastructure presents an unimaginable target for cybercriminals. While insurance is excellent at transferring financial risk from cybercrime, it does nothing to actually ensure this investment remains available for the business,” says Hake, who says there is a “huge opportunity” for insurance companies to help clients improve “cyber hygiene, reduce incident costs, and support financial incentives for investing in security controls.” 

Encouragingly, Hake has noticed a trend for more companies to “work with clients to provide insights on vulnerabilities and incentivize action on patching critical vulnerabilities.”

“One pure-technology mitigation that could help is SnapShield, a ‘ransomware activated fuse,’ which works through behavioral analysis,” says Doug Milburn, founder of 45Drives. “This is agentless software that runs on your server and listens to traffic from clients. If it detects any ransomware content, SnapShield pops the connection to your server, just like a fuse. Damage is stopped, and it is business as usual for the rest of your network, while your IT personnel clean out the infected workstation. It also keeps a detailed log of the malicious activity and has a restore function that instantly repairs any damage that may have occurred to your data,” he continues.

Ransomware attacks are also present within the crypto market, and there is a growing recognition that new tools are needed to build on-chain resilience. “While preventative measures are important, access controlled data backups are imperative. If a business is using a solution, like Jackal Protocol, to routinely back up its state and files, it could reboot without paying ransoms with minimal losses,” said Eric Waisanen, co-founder of Astrovault.

Ultimately, tackling the growing menace of cyber threats requires a holistic approach that combines policy measures, technological solutions and human vigilance. Whether or not a ban on ransom payments is implemented, the urgency of investing in robust cybersecurity frameworks cannot be overstated. As we navigate an increasingly digital future, our approach to cybersecurity will play a pivotal role in determining how secure that future will be.

Emory Roane, policy counsel at Privacy Rights Clearinghouse, says that mandatory disclosure of cyber breaches and offering identity theft protection services are essential, but it “still leaves consumers left to pick up the pieces for, potentially, a business’ poor security practices.”

But the combination of mandatory disclosure and the threat of litigation may be the most effective approach. He highlights the California Consumer Privacy Act.

“It provides a private right of action allowing consumers to sue businesses directly in the event that a business suffers a data breach that exposes a consumer’s personal information and that breach was caused by the business’ failure to use reasonable security measures,” Roane explains. That dovetails with a growing recognition that data is an important consumer asset that has long been overlooked and transferred to companies without remuneration.

Greater education around cybersecurity and data sovereignty will not only help consumers stay alert to ongoing threats — e.g., phishing emails — but also empower them to pursue and value more holistic solutions to information security and data sharing, so that ransomware attacks become less frequent and less severe when they do happen.

Bans rarely work, if for no other reason than enforcement is either physically impossible or prohibitively expensive. Giving into ransoms is not ideal, but neither is penalizing the entity that is going through a crisis. What organizations need are better tools and techniques – and that is something that the cybersecurity industry, in collaboration with policymakers, can help with through new technologies and the adoption of best practices.

Christos Makridis

Data as Currency

This article was originally published in Wall Street Journal (with Joel Thayer).

America’s antitrust policies are stuck in the 1980s. That was when courts and regulators began relying on what’s called the consumer-welfare standard. Articulated in Robert Bork’s 1978 book, “The Antitrust Paradox,” the standard replaced classical antitrust analysis, which focused primarily on promoting competition. Courts and regulators are supposed to take into account a variety of consumer benefits, including lower prices, increased innovation and better product quality.

But scholars, courts and regulators have ignored Bork’s multifaceted tests and obstinately focused on price alone. The result, 40 years later, is that a few tech giants have been able to dominate the market. The problem is that their offering of free services presents a new challenge for measuring anticompetitive harm and consumer welfare. If price alone is our measure, it’s hard to argue that free services are bad for consumers.

Legal analysts have difficulties applying nonprice factors to tech companies even when confronted with such demonstrations of monopoly as viewpoint-based censorship and imposing rents on developers of apps and ad tech—or even such demonstrations of actual consumer harm as privacy violations or pass-through costs on digital goods.

These tech platforms have enabled instant communication, e-commerce, information search and political engagement. In exchange for these services, customers provide data. In a new working paper, we argue that data is the new currency that these tech behemoths are capitalizing on. Every click, every interaction and every transaction feeds the digital economy.

In this light, the concept of free services is misleading, because consumers do pay a price by giving away their data. Worse, they do so often without understanding the full implications. These facts demand recalibration of the consumer-welfare standard to protect consumers’ rights and promote competitive markets. Data is more than just a digital footprint. It is a resource that tech companies exploit to amass control and wealth. The power dynamics in this exchange remain unbalanced, with consumers often unaware of the value of their data.

Some courts and scholars have argued that these harms are speculative and difficult to quantify. But there is a metric by which we can more accurately measure whether consumer welfare is served by tech companies: the amount of data they collect in exchange for those free services. Our paper explains several methods for deriving the value of data, especially from financial markets and structural methods. In general, these methods look at the role data plays in the production of goods and services.

Google, for example, required few data points from users when it made its search service available in 1997. Today it requires near-constant access to its users’ geolocation, spending habits and time spent on other sites. A judge could evaluate whether Google is arbitrarily requiring its users to provide more data—akin to raising the price of a product—solely to avail itself of ad revenues and more market share. To do so would be to engage in anticompetitive behavior. Antitrust law doesn’t allow this type of behavior in any other context.

Consumers bear greater risk and gain little new benefit every time tech companies pilfer more data from them. Even with the increase in data these companies obtain, the quality of their services remains virtually unchanged. They collect this data with few safeguards. And thanks to their buying out or merging with other companies, they lack any meaningful competitors.

Big Tech has, in effect, made data a new currency, which functions as the basis of many Big Tech companies’ business models. In the face of today’s data-driven digital markets, the fact that data is currency should compel us to revisit how we think about antitrust harm and what constitutes a competitive tech market.

Christos Makridis

Men over 45 are working fewer hours, new research shows

This article was originally published in Fast Company.

There is no shortage of anecdotes when it comes to people sharing strong opinions about remote work and its effects on productivity and the tendency to slack off. These narratives are important, but they may not tell the whole story. Fortunately, newly available data from the American Time Use Survey (ATUS) by the Bureau of Labor Statistics provides some insight.

MEASURING TIME SPENT IN DIFFERENT ACTIVITIES

The ATUS is the only federal survey providing data on the full range of nonmarket activities, including the amount of time people spend on paid work, childcare, volunteering, and socializing. Individuals in the ATUS are drawn from the sample of respondents in the Current Population Survey as they exit that survey.

One of the major benefits of the ATUS is that it measures a wide array of activities, not just time at work as many existing surveys do. This allowed me in my research to differentiate between work, leisure, household chores, childcare, and more.

Another major benefit of the ATUS is that it collects detailed 24-hour time diaries in which respondents report all the activities from the previous day in time intervals. These records are not only more detailed but also more reliable than standard measures of time allocated to work available in other federal datasets that require respondents to recall how much they worked over the previous year or week. These diaries contain much less noise than typical survey results.

UNCOVERING CHANGES IN TIME USE AMONG REMOTE WORKERS

Drawing on ATUS data from 2019 to 2022 among employed workers between the ages of 25 and 65, my new research paper documents new trends in time use, distinguishing between those in more- versus less-remote jobs.

To measure remote work, I use an index by professors Jonathan Dingel and Brent Neiman at the University of Chicago, reflecting the degree to which tasks in a given occupation can be done remotely versus in person.
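The intuition behind such an index can be shown with a toy sketch. This is a simplified, hypothetical stand-in, not Dingel and Neiman’s actual O*NET-based procedure; the occupations, task lists, and the `requires_presence` attribute are invented for illustration.

```python
# Toy teleworkability index: an occupation counts as fully remote-feasible
# only if none of its tasks requires physical presence.
# (Hypothetical stand-in for the Dingel-Neiman index; all data invented.)

def teleworkable(tasks):
    """Return 1.0 if every task can be done remotely, else 0.0."""
    return 1.0 if all(not t["requires_presence"] for t in tasks) else 0.0

occupations = {
    "software developer": [
        {"task": "write code", "requires_presence": False},
        {"task": "attend planning meetings", "requires_presence": False},
    ],
    "dental hygienist": [
        {"task": "clean teeth", "requires_presence": True},
        {"task": "update patient records", "requires_presence": False},
    ],
}

index = {occ: teleworkable(tasks) for occ, tasks in occupations.items()}
print(index)  # software developer -> 1.0, dental hygienist -> 0.0
```

The real index is built from richer task attributes and yields a continuous score across hundreds of occupations, but the core idea is the same: in-person task requirements cap how remote a job can be.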

WORK TIME SHRUNK BY NEARLY AN HOUR

The first main result is that time allocated to work activities declined by nearly an hour among remote workers in 2022, relative to their 2019 trend before the pandemic, and time allocated toward leisure grew by about 40 minutes. The remainder of the time appears to have gone toward activities that are not otherwise classified, which might reflect scrolling on social media.

Your first instinct might be that time at work, of course, declined because people are simply spending less time on their commutes. While that is true, it doesn’t explain the sustained decline in time at work and increase in leisure from 2020 to 2022.

Furthermore, I ran separate models to differentiate between “pure work” and “work-related activities”—the latter including travel time to work. All of the changes in time at work come from “pure work,” rather than other categories related to travel or other income-generating activities.

But what’s even more striking is that the decline in work and rise in leisure is concentrated among males, singles, and those without children. In fact, single males over the age of 45 in remote jobs experienced a nearly two-hour decline in time allocated to work in 2022, relative to 2019, and over an hour increase in time allocated to leisure. This demographic divergence demonstrates the heterogeneity in responses to remote work.

Compare these patterns with those among women and caregivers. I found that college-educated women allocated an additional 50 minutes per day to work in 2022, relative to 2019. Among non-college-educated women, there were no statistically significant changes. I also found a nearly 30-minute-per-day increase in work among women with children. At least some of that increase in work is coming from a decline in home production activities, such as taking care of children and doing chores around the house, among the college-educated women.

IMPLICATIONS FOR PRODUCTIVITY AND THE LABOR MARKET

Do these results on remote work—especially for single males—simply reflect the phenomenon of quiet quitting, where employees disengage from work while remaining employed?

While more research is needed, the short answer appears to be no. In fact, I found that remote workers reported higher satisfaction with their lives and felt better rested. Remote workers also did not report more time allocated toward interviewing for other jobs. Cumulatively, these facts imply that changes in time use—at least since 2019—are not driven by disengagement.

These results have important implications for the debate about productivity. My other research has found that hybrid work arrangements may offer the best of both worlds.

For example, my research with Jason Schloetzer at Georgetown University using data from Payscale shows that the positive relationship between remote work and job satisfaction is statistically significant for hybrid workers only after accounting for differences in corporate culture. And even then, corporate culture dwarfs the economic significance of remote work.

Similarly, my work with Raj Choudhury, Tarun Khanna, and Kyle Schirmann at Harvard Business School using data from a randomized experiment in Bangladesh shows that workers on a hybrid schedule—working some days at home and some in the office—are more creative, send more emails, and feel like they have a better work-life balance relative to their fully remote or fully in-person peers.

It’s clear that remote work is not a one-size-fits-all phenomenon. While there are many benefits of remote work that come in the form of breaking down barriers and heightened flexibility, there are also new challenges that must be managed.

Crucially, we must take responsibility for putting into practice the right habits and processes to manage our time so that it does not drift away. Business leaders should help inculcate a culture of excellence by focusing on outcomes—not simply measures of hours worked—and lead by example.

Christos Makridis

The transformative role of water markets for a climate-changed future

This article was originally published in the Global Water Forum.

Water markets provide a mechanism for the efficient allocation of water resources based on market principles. In a water market, water rights can be bought and sold, allowing water to flow from areas of low value to areas of high value. Could this mechanism also play a significant role in addressing the challenges of transboundary water governance?

Enhancing efficiency

Newly released research published in the American Economic Review by Professor Will Rafey at the University of California Los Angeles provides valuable insights into the functioning and benefits of water markets (Rafey, 2023). Drawing on data from the largest water market in history, located in southeastern Australia, Rafey finds that water trading increased output by 4-6% from 2007 to 2015, equivalent to avoiding an 8-12% uniform decline in water resources. This indicates that water markets can significantly enhance the efficiency of water allocation and usage.

While there is a large body of research attempting to estimate the value of trading water rights, most studies have run into at least three challenges. First, there are practical realities that are tough to model in river systems, such as costly and uncertain flow constraints. Second, there are geographic and hydrological constraints, including changes in the ecosystem and climate that affect supply and behavior. Third, the set of feasible trades in the water network is subject to many constraints, such as the cost of moving water and the direction it flows.

Rafey takes a two-step approach that begins by estimating the production functions for water, which map irrigation volumes into agricultural output using producer-level longitudinal data on irrigation, physical output, and local rainfall. To address the traditional concern that some farms might be systematically more productive than others, thereby confounding the relationship between inputs and outputs due to unobserved differences, Rafey leverages the longitudinal nature of the data and the heterogeneity in how water sharing rules, also known as diversion formulas, evolve nonlinearly across space and time. Crucially, these diversion rules are not within the control of any individual farm, so they provide an external stimulus to study how output evolves. Then, Rafey links the water trading data with the production functions to estimate the realized value of trades, thereby sidestepping having to parameterize and specify the set of feasible trades and all the many constraints that go into water systems.
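The two-step logic can be sketched on synthetic data. This is a stylized illustration of the general approach — a within-farm (fixed-effects) estimate of a log-linear production function, followed by valuing a reallocation with the estimated function — not Rafey’s actual estimator; all parameters and data below are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Step 1: estimate a production function log(q) = a_i + b * log(w) ---
# Synthetic panel: farm productivity a_i is unobserved, so the within
# (fixed-effects) transformation is used to recover the water elasticity b.
n_farms, n_years = 50, 8
a = rng.normal(0.0, 0.5, n_farms)                    # farm fixed effects
b_true = 0.35                                        # water elasticity
w = rng.lognormal(0.0, 0.4, (n_farms, n_years))      # irrigation volumes
log_q = a[:, None] + b_true * np.log(w) \
        + rng.normal(0.0, 0.05, (n_farms, n_years))  # log output + noise

lw = np.log(w)
lw_d = lw - lw.mean(axis=1, keepdims=True)           # demean within farm
lq_d = log_q - log_q.mean(axis=1, keepdims=True)
b_hat = (lw_d * lq_d).sum() / (lw_d ** 2).sum()
a_hat = log_q.mean(axis=1) - b_hat * lw.mean(axis=1)  # recover fixed effects

# --- Step 2: value a reallocation using the estimated function ---
# Treat year-0 volumes as no-trade endowments; trade moves the same total
# water toward more productive farms (shares follow from equalizing the
# marginal product exp(a_i) * b * w_i**(b-1) across farms).
w_end = w[:, 0]
shares = np.exp(a_hat / (1.0 - b_hat))
shares /= shares.sum()
w_trade = w_end.sum() * shares

def output(alloc):
    return (np.exp(a_hat) * alloc ** b_hat).sum()

gain = output(w_trade) / output(w_end) - 1.0         # proportional gain
```

The payoff of this structure is the same as in the paper: once the production function is estimated, the value of any observed reallocation can be computed directly, without having to specify the full set of feasible trades and network constraints.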

Policy implications

Rafey’s research is important for both methodological reasons and policy guidance. Methodologically, it shows how to estimate the value of trading in a setting where there is substantial stochasticity, absence of a complete market, and dynamic game-theoretic interactions without having to specify all these ingredients explicitly in the model. Instead, the two-step approach allows him to flexibly estimate the value of water trading.

With respect to policy, his results suggest:

  • There is growing institutional, including governmental, support for water markets. While market power and other frictions may exist, water markets have been proven to raise allocative efficiency. The estimated total gains from trade provide a lower bound on the value of maintaining the infrastructure required for water markets.

  • Australia’s experience with setting up and running water markets provides a template for other countries. It demonstrates that efficiency gains are possible using modern monitoring technology in an arid region. The extent of a river system’s underlying hydrological variability, which can be measured directly from historical river inflows and rainfall, is identified as an important source of water markets’ prospective value.

  • Especially in the presence of climatic change, water rights can play a substantial role in facilitating adaptation. Efficient annual trade should reallocate water from places of relative abundance to places of relative scarcity, lowering the costs of idiosyncratic variability across the river trading network. By increasing the productive efficiency of a basin’s aggregate water endowment, a water market makes drier years less costly, helping irrigators adapt to aggregate shocks. “Without water reallocation through the annual market, output would fall by the same amount as if farms faced a uniform reduction in water resources of 8–12 percent. By comparison, government climate models for this region predict surface water resources to decline by 11 percent in the median year under a 1°C increase in temperature by 2030,” said Rafey.

Although we have long known that water markets are important mechanisms for ensuring the efficient allocation of water resources, we have not known how much they matter or how their value depends on different conditions, such as varying diversion rules and a changing climate. This research provides the latest comprehensive evaluation of the importance of water markets and their value in the years ahead to help manage scarce resources in a stochastic world.

The role of water markets in transboundary governance?

Transboundary water governance is a complex social, political, and economic issue involving the management and allocation of water resources across political boundaries. It is a critical aspect of international relations, as water is a vital resource that is unevenly distributed across the globe. The governance of these resources is fraught with at least two major challenges.

First, water is a shared resource that does not respect political boundaries. Rivers, lakes, and aquifers often span multiple countries, making it challenging to manage and allocate these resources equitably. Furthermore, the governance of transboundary water resources involves a multitude of stakeholders (e.g., governments, local communities, non-governmental organizations, and private entities), each with different interests, priorities, and perceptions of how water resources should be managed, leading to conflicts and disagreements.

Second, the governance of transboundary water resources is further complicated by climate change, population growth, economic development, and more. These factors increase the demand for water and exacerbate the challenges of managing and allocating these resources.

The creation of water markets has the potential to help water managers meet these challenges by allocating supply and demand efficiently and quickly without central planning and in the face of a wide array of uncertainty, ranging from climatic change to macroeconomic shocks. Water managers and policymakers across the world should work together to build upon the successful lessons learned from Australia’s example in the Murray-Darling Basin.

Christos Makridis

Single, Remote Men Are Working Less

This article was originally published in City Journal.

The Covid-19 pandemic utterly transformed the world of work. But while employees across the globe have adapted to conducting business from their living rooms, CEOs and business leaders have struggled with this seismic shift, openly voicing their concerns about the impact of remote work on productivity, employee engagement, and corporate culture.

Some business leaders have come out strongly against working from home. “Remote work virtually eliminates spontaneous learning and creativity, because you don’t run into people at the coffee machine,” said Jamie Dimon, CEO of JPMorgan Chase. Others are more optimistic: “People are more productive working at home than people would have expected,” said Mark Zuckerberg, CEO of Facebook. And still others remain cautious: “Working from home makes it much harder to delineate work time from personal time. I encourage all of our employees to have a disciplined schedule for when you will work, and when you will not, and to stick to that schedule,” said Dan Springer, CEO of DocuSign.

But what do the data actually say? I recently released a paper, “Quiet Quitting or Noisy Leisure? The Allocation of Time and Remote Work, 2019-2022,” which documents trends by drawing on the latest data from the Bureau of Labor Statistics’ American Time Use Survey (ATUS).

Since there is no direct measure of fully remote, hybrid, or fully in-person work arrangements in the ATUS, I focus on an index, introduced in 2020 by the University of Chicago’s Jonathan Dingel and Brent Neiman, that measures the degree to which tasks within an occupation can be done remotely. The index also happens to do a good job of identifying which jobs people are likely working remotely in—with the caveat that an employee at a company in Texas could differ in work arrangement from a New York worker with the same occupation but a different employer.

I discovered three things. First, remote workers allocated roughly 50 minutes less per day to work activities and 37 more minutes per day to leisure activities in 2022, relative to 2019. Time allocated to home production, such as chores and caring for other household members, did not change.

Second, and perhaps more importantly, these declines are concentrated among males, singles, and those without children. In fact, single males over the age of 45 working remotely spend more than two hours less per day in work activities in 2022, relative to 2019. If anything, college-educated females are the ones who have increased their time at work slightly.

Third, changes in the allocation of time cannot be explained by job-search activity or declines in well-being. If these declines in labor hours were driven by “quiet quitting,” then remote workers would be spending more time searching for other jobs or would feel worse about life overall.

These findings underscore the complexity of the remote-work revolution. It is not merely a binary shift from the office to the home but a complex reordering of our daily lives with far-reaching implications. For businesses, understanding these changes—and especially recognizing the challenges that different demographic brackets are struggling with—is critical for managing workforce expectations and productivity. As we navigate this new landscape, it’s essential to look beyond the surface-level changes and grapple with the deeper shifts in how we allocate our time.

Christos Makridis

How Will the Rise of AI Impact White-Collar Jobs?

This article was originally published in Nasdaq.

There is growing fear that AI and other new emerging technologies will destabilize white-collar jobs. What are your thoughts?

While it is true that generative AI can displace many tasks — and that's true for any technology — the big question is what new tasks and workflows it enables. Recently released research that I've conducted introduces an occupational measure of how much coordination is required within a job and how that relates to exposure to ChatGPT, based on a new index released by OpenAI.

Occupations requiring more coordination have higher wages and are actually less exposed to generative AI, suggesting that generative AI might actually play an important role in breaking down barriers and easing the completion of complex work.

I've also published research looking at how the expansion of AI jobs within cities has impacted average well-being, and found the effect has been positive, particularly in cities with more professional services. The reason? Increases in productivity, such as real GDP and income.

But ultimately how these new technologies will affect employment and wages is a policy design choice. If governments impose heavy regulation and high taxes on labor, that encourages companies to substitute away from human labor towards capital to save on costs. And that's what we see in many European countries — high labor tax rates are correlated with higher capital to labor ratios, and that in turn leads to lower labor productivity growth and less of the total surplus in an economy going towards labor.

How will these technologies transform or impact white-collar jobs?

There is an open question about how generative AI will affect productivity, and whether it may accelerate income inequality and polarization in the labor market. Preliminary evidence from OpenAI indicates that 80% of the U.S. workforce could have at least 10% of their work tasks affected by the introduction of large language models (LLMs), and 19% of workers may see at least 50% of their tasks impacted.

However, my research offers an optimistic view of generative AI by showing that occupations requiring greater degrees of coordination over complex work are less – not more – exposed to displacement by generative AI. In other words, generative AI might actually end up serving as a complement to labor in occupations requiring greater degrees of coordination. Since so much work requires tacit information that is not easy to formalize and systematize, large language models can digest vast quantities of information and convert it into actionable recommendations and instructions for other team members to review and act upon.

How can consumers and business best prepare for the rise of AI technology? What are some advantages and what are some risks?

My research has highlighted the importance of intellectual tenacity as a personality trait in becoming resilient to technological change. That requires perseverance and curiosity to thrive amid challenges and continue learning even after formal schooling ends. We have so many tools at our disposal for living more effectively and productively, but sometimes inertia keeps us doing business as usual. A practical suggestion is to allocate some time every week to evaluating the inventory of work and strategizing internally — or even with ChatGPT as a sparring partner — about how to work smarter.

As demand and use cases around generative AI grow, what should investors keep in mind?

Investors should think about breakthrough innovations, rather than the marginal ones. The highest value companies, ranging from Tesla to Apple, were the companies that did things people thought were impossible. That means having a great understanding of pain points among consumers and an eye for solutions that are just crazy enough to work.

Investors should also place a premium on companies with versatile and experienced management. Startups and young founders can be great, and many certainly succeed, but many more fail for lack of experience or out of hubris. That should prompt investors to be prudent when evaluating a team's likelihood of success in execution, not just the novelty of the idea.

How will these new technologies impact different sectors?

Each sector has its own challenges and pain points. On one hand, healthcare is fraught with an insurance sector that charges high premiums and delivers low customer service, coupled with pharmaceutical companies that have a tendency to overmedicate rather than encourage preventative behavior. In this sense, AI has the potential to personalize behavioral recommendations that are likely to improve health and well-being, as well as automate otherwise mundane and time-intensive activities among insurers that would allow them to pass cost savings on to their customers.

On the other hand, education has been increasingly failing to deliver for students, as evidenced by K-12 math and reading test scores reaching their lowest levels in 2022 and a flattened college wage premium over the past 15 years.

AI has the potential to transform the delivery of educational services by personalizing learning to each person's unique learning style. AI can also help break down barriers that may have traditionally discouraged an individual from continuing education. In short, AI is a general-purpose technology, and it will affect each sector differently.

Christos Makridis

An Enduring Need for Choice

This article was originally published in City Journal with Goldy Brown III.

This year marks the 40th anniversary of the publication of “A Nation at Risk.” Released by Ronald Reagan’s education secretary Terrel Bell and prompted by the international underperformance of American students, the report challenged schools to make dramatic improvements. Education policy was to focus on standards, accountability, and equity. Yet after four decades and countless new initiatives, the record shows that choice is a critical, but neglected, factor for success.

Every state has some form of academic standards, but these alone have not improved student achievement or closed achievement gaps. The standards movement began to take shape in 1989, when President George H. W. Bush convened governors at the Charlottesville Education Summit to discuss educational programming. Bill Clinton would later introduce his education initiative, “Goals 2000,” after becoming president in 1993, requiring states to make high school graduation requirements more rigorous. The most recent installment came in the form of Common Core, whose prescriptions were approved by 40 states only to be repealed by many.

The push for accountability and equity, meanwhile, ramped up when closing the achievement gap became a federal mandate in George W. Bush’s 2001 No Child Left Behind law. The policy obliged schools to show “adequate yearly progress” on statewide reading and math assessments for all students and to close the gap between certain subgroups. Schools failing to meet these objectives were sanctioned. The Obama administration then required states to adopt Common Core standards and their own accountability measures in order to receive Race to the Top funding.

At the end of this 40-year effort, what has changed? Compared with other countries, the U.S. is not gaining academically. Domestically, gaps between racial and income groups have not just persisted but widened. In fact, student test scores in math and reading fell to their lowest levels in 2022, according to the National Assessment of Educational Progress. If the U.S.’s decentralized approach to education is going to be an asset, we need to learn what works and what doesn’t. Though states would seem to be a perfect laboratory for such healthy experimentation, things have often not worked out that way.

Our recent book, The Economics of Equity in K-12 Education, establishes best practices for states and local governments. Besides the family, the local community has the most significant influence over a child’s education and future. Local stakeholders make decisions about teacher pay, teacher training, curriculum, programming for local students, and budgets. These decisions vary by district.

Yet national trends have further stifled progress on local education policy. Our research finds that school closures led to a deterioration in parental mental health that ultimately affected students—even beyond the learning losses arising from remote instruction—and that many families decided to switch permanently to homeschooling, even after schools began reopening. That so many families decided to homeschool highlights the increasing preference for school choice. Indeed, until policymakers confront the unambiguous evidence that school choice can improve learning outcomes and close the achievement gap, they will repeat the same mistakes we’ve seen over the last 40 years.

What defense do children have from school boards, unions, or other forces that fail to look out for their best interests? A child’s education is one of the most critical indicators of future success. Families need options, especially now that the pandemic has subsided. More states are concluding that school choice is necessary; nationally, we need to expand the role of choice in educational policy. Choice is the only policy that can address our biggest challenge: helping a decentralized system meet the needs of our pluralistic nation’s population.


Gary Vaynerchuk: Pop Culture And Innovation's Role In Business Growth

This article was originally published in Forbes.

Entrepreneur and social media icon Gary Vaynerchuk is redefining the rules of the game by bringing culture, innovation, and business together. Building on the huge success of last year’s VeeCon 2022 in Minneapolis, Vaynerchuk is at it again, this time in Indianapolis from May 18-20, with a new lineup of speakers and talks.

Access to VeeCon, however, is token-gated for the VeeFriends community, the holders of his NFT collection of 10,255 tokens. “Eventually, we will all interact with NFTs because they will be our airline tickets, membership cards, and more,” Vaynerchuk said.

Vaynerchuk's interest in digital assets goes beyond mere fascination: he sees them as a pivotal mechanism for societal interaction and a transformative tool for brand-consumer relationships. These digital assets, unique by design and stored on the blockchain, offer a new way for businesses to engage their consumers, especially those deeply ingrained in the digital landscape. Vayner3 has been working with many brands to integrate Web3 strategies into their business models in a way that drives business efficiency and customer engagement.

A good example of their Web3 strategy work is their collaboration with Crown Royal, advising on and helping execute its digital collectibles launch in November 2022. In particular, their “Chain of Gratitude” initiative aimed to spread generosity following Super Bowl Sunday: they launched digital collectibles through a game in which Crown Royal awarded prizes based on the sharing of gratitude from one person to another. “Our mission is to empower brands with cutting-edge strategies to navigate the next era of the internet, keeping them at the forefront of digital innovation and consumer attention,” said Avery Akkineni, president of Vayner3.

Vaynerchuk leverages the power of popular culture by bringing together people who normally would not interact with one another but who, when they do, have the potential to build lasting and impactful relationships. He understands that consumer behavior is a reflection of cultural trends, and by tuning into these trends, brands can effectively resonate with and serve their audience.

The roster at VeeCon is incredibly diverse, ranging from Tim Tebow to Arianna Huffington to Busta Rhymes. Vaynerchuk’s goal is not to attract a group of people who all think the same, but rather to expose people to different ideas and have serious conversations with people they may not normally meet. “Do you know how happy I am that people try things that they haven’t tried before this conference? It was the same thing in the wine business: people find a wine they like and they keep on drinking it. I want people to think about different stuff. I have only innovated by looking at random things from a different perspective and those were the breadcrumbs for innovation,” Vaynerchuk continued.

Last year, VeeCon 2022 brought together all kinds of people, ranging from pure Gary fans who bought into NFTs to marketing executives at enterprises and foundations. “As someone who oversees internal comms for a community of orgs – over 1000 employees in all – I’m curious about ways to deploy NFTs as tools within an organization to foster culture, connection, and motivation… There’s an enormous opportunity to experiment in this space – it’s pretty wide open right now, and I want to be a trailblazer. My employer both sees the opportunity and encourages me to explore it, which is why I’m here. For folks like me who are interested in comms, marketing, innovation, and the consumer blockchain, VeeCon is basically the center of the universe,” said Rob Raffety, director of internal communications at Stand Together.

The textbook approach in business often pays lip service to understanding the consumer and competitive differentiation, but its implementation breaks down in practice. “I hated school because it did not allow for serendipity and let people think. It was based on memorization,” said Vaynerchuk. Instead, VeeCon 2023, and Vaynerchuk’s media enterprises more broadly, encourage mixing, matching, and engagement with pop culture even when there is no explicit destination in mind.

Love it or hate it, consider the rise of TikTok and the subsequent explosion of short-form video content across platforms. Vaynerchuk, an early adopter, leveraged this trend in his own media company by building out short-form content, advising businesses to do the same and harness the power of evolving consumer behavior for brand success.

Innovation has been a constant theme in Vaynerchuk's career. He understands that innovation isn't just about improving products or services—it's about shaping culture and identity—and it’s evident in the way technology and business are increasingly intertwined. Vaynerchuk contends that the evolving relationship between the two is not merely transactional. Instead, it's a symbiotic relationship where advancements in technology drive business innovation, and conversely, the demands of business push technology to new frontiers.

Vaynerchuk's unique perspective on Web3, pop culture, cross-industry collaboration, and innovation positions him at the forefront of a new business era. His insights, coupled with his relentless drive, pave the way for new avenues of consumer engagement and business growth. In a world where technology and culture are in constant flux, his approach to and implementation of VeeCon serve as a novel NFT use-case that businesses can learn from to thrive.

Gary Vaynerchuk’s vision is not just about predicting the future—it's about creating it. As he navigates the new business landscape with a keen eye on consumer trends and technological advancements, Vaynerchuk exemplifies the power of embracing change, fostering innovation, and leveraging the convergence of disciplines. His journey serves as a testament to the transformative potential of a truly innovative mindset in the world of business.


Airdrops are great, but be aware of the risks

This article was originally published in Cointelegraph.

Airdrops have emerged as a powerful tool for token distribution, user acquisition and community building as the blockchain industry has grown. They provide a unique opportunity for projects to distinguish themselves, incentivize desired behaviors and foster long-term relationships with their user base. But the question remains: Do airdrops work?

Based on my prior research in the Journal of Corporate Finance, the answer — at least according to the data so far — is “yes.” But my new research with Kristof Lommers and Lieven Verboven highlights that their efficacy hinges on thoughtful design, clear objectives and strategic execution.

At the heart of a successful airdrop lies the careful selection of eligibility criteria and incentives. These criteria can range from simple (like owning a specific token) to more complex (like exhibiting certain behaviors on-chain), but they should be aligned with the airdrop’s objectives. For instance, if the goal is to reward loyal users, then the eligibility criteria could include users who have held a certain token for a specific period. Similarly, if the aim is to promote a new protocol, then the criteria could be interacting with it.
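As an illustrative sketch only (the wallet fields and thresholds below are hypothetical, not drawn from any particular project), a loyalty-based eligibility check might look like:

```python
from dataclasses import dataclass

@dataclass
class Wallet:
    address: str
    token_balance: float
    months_held: int  # how long the wallet has held the token

def is_eligible(w: Wallet, min_balance: float = 100.0, min_months: int = 6) -> bool:
    """Loyalty criterion: reward wallets that held at least `min_balance`
    tokens for at least `min_months` months."""
    return w.token_balance >= min_balance and w.months_held >= min_months
```

A promotion-focused airdrop would swap these fields for on-chain interaction data (e.g., whether the wallet has used the new protocol), but the filtering pattern is the same.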

Incentives, on the other hand, can take various forms — from direct token rewards to exclusive access to new features or services. The key is to strike a balance between being attractive enough to engage users and remaining economically viable for the project. For example, the Blur airdrop integrated social media activity into its eligibility criteria. Instead of just providing tokens to existing users or holders of a certain token, Blur incentivized users to share the airdrop on social media platforms and encouraged referrals among their networks to gain extra tokens. This method not only broadened the reach of its airdrop but also fostered a sense of community as users actively participated in spreading the word about Blur.

Timing also plays a crucial role. Launching an airdrop too early in a project’s lifecycle might lead to token distribution among users who lack genuine interest, while a late-stage airdrop might fail to generate the desired buzz. The optimal timing often coincides with a project’s token launch, creating initial distribution and liquidity. As prior research by Yukun Liu and Aleh Tsyvinski highlighted, momentum in the market plays a big role in explaining token prices.

However, airdrops are not without their challenges. One of the most serious risks is Sybil attacks, where malicious actors create multiple identities to claim a disproportionate share of tokens. Mitigating this risk requires a blend of strategies, including upfront whitelisting of users, raising barriers to entry and implementing Sybil attack detection mechanisms.
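One simple illustrative heuristic (real detection pipelines combine many more signals) is to cluster claimant addresses by the wallet that funded them and flag funders that seeded unusually many fresh claimants:

```python
from collections import defaultdict

def flag_sybil_clusters(funding_edges, max_cluster_size=3):
    """Flag claimant addresses whose funding wallet seeded suspiciously many
    claimants -- a common Sybil fingerprint, where one wallet fans out gas
    money to a large batch of fresh addresses before the claim window opens.

    `funding_edges` is a list of (funder, claimant) address pairs.
    """
    clusters = defaultdict(list)
    for funder, claimant in funding_edges:
        clusters[funder].append(claimant)
    flagged = set()
    for claimants in clusters.values():
        if len(claimants) > max_cluster_size:
            flagged.update(claimants)
    return flagged
```

The `max_cluster_size` threshold is a judgment call: too low punishes legitimate families or exchanges, too high lets small Sybil farms through, which is why production systems layer this with whitelisting and behavioral signals.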

Especially in the past two years, projects must take into account the regulatory environment. Although nonfungible tokens (NFTs) have been largely exempt from strict regulatory enforcement action by the Securities and Exchange Commission, fungible tokens have been more squarely in its line of sight, and the distribution of tokens coupled with an expectation of future profit could increase legal risk. Given the regulatory gray zone around tokens, projects must ensure they’re not inadvertently issuing securities. And with most large blockchain networks being public, privacy concerns may arise, potentially revealing sensitive information about airdrop recipients.

So, how much of a token supply should be allocated to an airdrop? There’s no one-size-fits-all answer. A project’s unique goals and strategies should guide this decision. However, research indicates that teams allocate 7.5% of their token supply to community airdrops on average.
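As a back-of-the-envelope sketch of that average (the supply figure in the example is hypothetical):

```python
def community_airdrop_tokens(total_supply: int, share: float = 0.075) -> int:
    """Tokens reserved for a community airdrop at a given share of total supply.
    The 7.5% default is the average reported in the research; any real project
    should set this from its own goals and tokenomics."""
    return round(total_supply * share)
```

A project with a hypothetical 1-billion-token supply allocating the 7.5% average would reserve 75 million tokens for its community airdrop.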

One of the often-overlooked aspects of airdrops is their potential to harness the power of network effects. By incentivizing sharing, airdrops can amplify their impact, attracting more users to a project’s ecosystem and creating a self-reinforcing cycle of growth and value creation.

A final consideration is the simplicity of the airdrop. Convoluted eligibility criteria will confuse people, even if they are intelligently and rationally designed. An airdrop should be a straightforward and enjoyable experience for users, particularly non-crypto natives. Collaborating with wallet providers can simplify the process for such users, making the airdrop more accessible and attractive.

A good analogy is in the context of monetary policy. When the United States Federal Reserve articulates simple policy rules about how it will deal with inflation, and then sticks to them, markets react much more positively than when it deviates from rules. The same is true with airdrops: Design them carefully, but keep them simple and transparent.

Airdrops can indeed work wonders when designed and executed well. They offer an exciting avenue for projects to stand out in the crowded blockchain landscape, encouraging user engagement and community development.

But their success is not a matter of chance — it’s a product of thoughtful design, clear objectives and strategic execution. Especially as many potential airdrops loom on the horizon with Sei Network, Sui, Aptos and more, understanding and harnessing the power of airdrops will become increasingly crucial for projects aiming to thrive in this dynamic space.


Opera Streaming Revolution: Boston Baroque, IDAGIO, And GBH Music Unveil Digital Innovation For Global Audiences

This was originally published in Forbes.

On the heels of its 50th anniversary, Boston Baroque, in partnership with IDAGIO and GBH Music, premiered a digitally streamed series of performances of Christoph Willibald Gluck’s opera Iphigénie en Tauride on April 20th; the stream will remain available for $9 until May 21st. Under the musical direction of Martin Pearlman and the leadership of Jennifer Ritvo Hughes, Boston Baroque has become a leading cultural institution and innovator.

Technological innovation in classical music

Although all sectors were adversely affected by the onset of Covid-19 and the associated lockdowns, none was more affected than arts and entertainment, which saw an over 50% decline in employment between February 15 and April 25, 2020, and remained depressed well into the summer and into 2021, according to research published by the Brookings Institution. The same phenomenon held across other developed countries, according to the OECD. The loss in employment led to a significant deterioration in mental health and well-being among performers and other workers in cultural institutions, according to professors Samantha Brooks and Sonny Patel in a 2022 article published in Social Sciences & Humanities Open.

However, some cultural institutions responded to these challenges with substantial innovation. Following the onset of Covid-19, Boston Baroque began working with GBH – the leading multiplatform public media producer in America – to digitally stream performances across the world. GBH Music became a production collaborator and presenter for Boston Baroque, among other celebrated music organizations, allowing musical performances to continue. Even though streaming of performances increased across the industry, GBH Music stood out, most notably for its excellent production quality, which resembled an in-person experience as closely as possible.

"When GBH Music first met with Boston Baroque to explore the presentation of opera in our Calderwood Studio, we agreed that the goal was to find a new, innovative way to present these amazing works by taking advantage of the technology and talents we have at our disposal. And the results have been exceptional. The in-studio and on-line experiences bring audiences closer to the music, to the singers, and helps breathe new life into this centuries-old music. The visual and aural connection between artists and audiences is unique. Seeing musical talent, stunning production values, high-quality audio all come together to benefit opera, is thrilling,” said Anthony Rudel, General Manager of GBH Music.

Now that in-person performances have resumed, digital streaming has become a complement, rather than a substitute, for Boston Baroque, enlarging its reach and strengthening its world-renowned brand as a staple cultural institution. “The arts often don’t place enough value on how people want to consume what we have to offer—we miss out on key opportunities to grow revenue and reach… In a traditional performing arts business model, the opportunity for return on investment ends when the concert downbeat begins due to a bias for in-person performance. At Boston Baroque, we’ve used digital innovation to disrupt this core business model constraint, providing unique to market, compelling content that consumers value,” said Jennifer Ritvo Hughes, the Executive Director of Boston Baroque.

Economists have long pointed to technology as the primary driver of productivity growth in society, but whether it translates into improvements in well-being and flourishing depends on whether and how society integrates technology as a complement to, not a substitute for, humans.

“Through partnerships with GBH, IDAGIO, and others, we’ve built a model for developing and delivering content that audiences are asking for, while paying artists for their work in the digital concert hall. A reviewer once called what we do an ‘authentically hybrid experience,’ where we simultaneously deliver in person programming while capturing digital content of a high enough quality to monetize and distribute on platforms with a global reach… One year after going digital, our market grew from 4,000 regional households to 35,000 households in the US and the UK. At the close of our 22/23 season, we now have audiences in 55 countries on 6 continents and counting. We’re just beginning to explore the potential of digital—many possibilities lie ahead,” Hughes continued.

“When I founded Boston Baroque 50 seasons ago, it was the first period-instrument orchestra in North America, and so it was quite an experiment. Everything that has come since then—being the first period-instrument orchestra to perform in Carnegie Hall, being nominated for 6 GRAMMY® Awards for our recordings, and now streaming our concerts on six continents—has been the wonderful and unexpected outcome of a simple desire to make music in a free and authentic way,” said Martin Pearlman, the Founding Music Director.

Iphigénie En Tauride

Iphigénie en Tauride is based on a drama by the playwright Euripides, written between 414 BC and 412 BC, detailing the mythological account of the young princess who avoided death by sacrifice at the hands of her father, Agamemnon, thanks to the Greek goddess Artemis, who intervened and replaced Iphigenia on the altar with a deer. Now a priestess at the temple of Artemis in Tauris, she is responsible for ritually sacrificing foreigners who come into the land. The opera revolves around her forced ritualistic responsibility on the island, coupled with an unexpected encounter with her brother, Orestes, whom she had thought dead.

Led by stage director Mo Zhou and conductor Martin Pearlman, Soula Parassidis, a Greek-Canadian dramatic soprano, sang the title role of Iphigénie on April 20, 21, and 23 in Boston, accompanied by an outstanding cast of distinguished international artists, including William Burden, Jesse Blumberg, David McFerrin, and Angela Yam. The performance has amassed a wide array of glowing reviews.

Because the same opera is replayed many times over, even within the same year, stage directors bear significant responsibility for bringing a new perspective each time, particularly in an era of limited attention spans. “The classical music field is going through a schismatic change right now. As a practitioner and an educator, I ask myself and my students this question every day: ‘In the age of Netflix and Hulu, how can we make our art form more accessible?’ We’ve been putting ourselves on a high pedestal for a long time. If we do not adapt, we will gradually lose touch with the new generation of audiences,” said Mo Zhou, the Stage Director.

In contrast to the regietheater style, in which the director is encouraged to diverge from the original intentions of the playwright or composer, this production stayed true to its roots, featuring Iphigénie in a beautiful gown and highlighting the intense pain Iphigénie felt when asked to continue the sacrifices to the gods, and her subsequent intense joy when she discovered her brother, Orestes, was alive.

“I found this process of working on Iphigénie en Tauride with Boston Baroque, GBH and IDAGIO extremely fulfilling and refreshing. I think this production has presented a feasible formula where we keep the unique experience of the “live” performance, but also capture the ephemeral moments on stage and make it available to a broader audience across the world… In addition to thinking about the character building, stage composition and visual design like a conventional stage production, I also incorporated the notion of camera angles into my pre-blocking and design process, which shows in our end result. It demands a lot of advanced planning and the clarity of your dramatic and visual intention. It’s a beautiful collaboration between myself and our livestream director, Matthew Principe,” Zhou continued.

Future of the arts and the metaverse

IDAGIO has pioneered an incredible service for classical music and the performing arts, giving thousands more people across the world access to top-tier performances. “We are offering the infrastructure to any partner who is interested in sharing media content with audiences online. Recording and producing a concert is one thing. Distributing them and making them available to committed audiences around the world is another. That’s what we enable and what we love to do,” said Till Janczukowicz, CEO and Founder of IDAGIO.

The response to opening up in-person performances to digital audiences has been overwhelming. “IDAGIO has over 50,000 reviews on the App Store, averaging 4.7/5 stars. Users and artists love IDAGIO for many reasons, also because of our fair pay-out model: we remunerate by second and per user. This is as fair as you can get in audio streaming,” Janczukowicz continued. In many ways, digital streaming of performances is an early use-case of metaverse applications that aim to provide users with more immersive experiences and connectivity between physical and digital assets.

While we have yet to see many truly immersive and fully-fledged metaverse use-cases, there is substantial interest from consumers and metaverse companies alike, particularly for changing the way that people engage with the arts by giving artists a more experiential mechanism of performing for and connecting with their audience.

“Rapper Royce 5’9” is a great example of this. Even with a successful 20-year career under his belt, he has sought out better platforms to engage with aspiring artists and his community. With Passage, he’s creating a beautiful 3D space called the Heaven Experience to host exclusive songwriting and studio sessions, interviews with music industry veterans, live performances, and more. These types of interactions simply wouldn’t be possible on something like a Zoom call or traditional livestream,” said Caleb Applegate, CEO of Passage.

During an era of intense technological change, the arts play a more important role than ever, and technology has the potential to augment, not replace, in-person performances.
