Christos Makridis

America must harness stablecoins to future-proof the dollar

This article was originally published in Fortune Magazine.

With Congress just passing the federal budget, lawmakers will have an opportunity to tackle long-term financial challenges outside of crisis mode. One such challenge—and opportunity—is the rise of stablecoins: privately issued digital tokens pegged to fiat currencies like the U.S. dollar. Stablecoins have rapidly grown into a market worth hundreds of billions of dollars, facilitating billions in transactions, but they have lacked a comprehensive U.S. regulatory framework. Fortunately, Washington is signaling new openness to digital assets—evidenced by President Trump announcing the establishment of a strategic digital asset reserve for the nation. Creating the requisite regulatory clarity will unlock a new era of competition and innovation among banks.

Stablecoins are a strategic extension of U.S. monetary influence. Around 99% of stablecoin volume today is tied to the U.S. dollar, exporting dollar utility onto international, decentralized blockchain networks. A stablecoin market with the right guardrails can strengthen the U.S. dollar’s dominance in global finance. If people around the world can easily hold and transact in tokenized dollars, the dollar remains the go-to currency even in a digitizing economy. Recent congressional hearings echo this point—up to $5 trillion in assets could move into stablecoins and digital money by 2030, up from roughly $200 billion now. If the U.S. fails to act, it risks “becoming the rust belt of the financial industry,” as one fintech CEO warned.

Other jurisdictions aren’t standing still: Europe, the U.K., Japan, Singapore, and the UAE are developing stablecoin frameworks. Some of these could even allow new dollar-pegged tokens to be issued offshore—potentially eroding U.S. oversight. In short, America must lead on stablecoins or cede ground to Europe’s digital euro and other central bank digital currencies (CBDCs), which in their strictest form threaten both the private banking ecosystem and individual sovereignty. My research, for example, shows that CBDCs to date have had no positive effect on GDP growth or inflation reduction, but have had negative effects on individuals’ financial well-being.

Ideally, various regulated institutions—banks, trust companies, fintech startups—could issue “tokenized dollars” under a common set of rules. Before the 1900s, state governments held primary authority over banking. While that arrangement led to fragmentation and problems, with the right federal architecture, blockchain would allow banks to offer differentiated products and a version of what existed pre-1900—their own type of stablecoin, differing in security, yield, and/or other amenities—while still keeping the value pegged to the dollar. More broadly, a large body of academic research shows how stablecoins drive down transaction costs, speed up settlement times, and broaden financial inclusion through new services.

In the absence of federal action, we risk a patchwork of state-by-state rules or even de facto regulation by enforcement, which creates uncertainty for entrepreneurs and consumers alike. The Stablecoin Tethering and Bank Licensing Enforcement (STABLE) Act, introduced in the House in 2020, would require any company issuing a stablecoin to obtain a bank charter and abide by bank regulations (including approval from the Federal Reserve and FDIC before launching a stablecoin), and to hold FDIC insurance or Federal Reserve deposits as reserves. In effect, stablecoin issuers would be regulated like banks to protect consumers and the monetary system.

Preventing government overreach

However, as House Financial Services Committee Chairman French Hill has said, the goal should be to modernize payments and promote financial access without government overreach. Notably, Hill contrasted private-sector stablecoin innovation with the “competing vision” of a government-run digital dollar (a central bank digital currency) that could crowd out private innovation. And the STABLE Act could prove too draconian, penalizing non-bank entities. To that end, the recent bipartisan effort in the Senate—the Guiding and Establishing National Innovation for U.S. Stablecoins Act of 2025 (GENIUS Act)—has gained momentum.

In practice, the GENIUS Act could allow a regulated fintech or trust company to issue a dollar stablecoin under state supervision, so long as it complies with stringent requirements mirroring federal bank-like rules on liquidity and risk. This kind of flexibility, paired with robust standards, can prevent market fragmentation by bringing all credible stablecoin issuers under a regulatory “big tent.” It would also prevent any single point of failure: If one issuer falters, others operating under the same framework can pick up the slack, keeping the system stable.

Critics often voice concerns that digital currencies could enable illicit activity. But in reality, blockchain technology offers more transparency, not less, when properly leveraged. Every transaction on a public blockchain is recorded on an immutable ledger. Law enforcement has successfully traced and busted criminal networks by following the on-chain trail—something much harder to do with cash stuffed in duffel bags. In fact, blockchain’s decentralized ledger offers the potential for even greater transparency, security, and efficiency.

Following the momentum from the White House, Congress has a running start on crafting rules that bring stability and clarity to this market now that the budget has passed. Lawmakers should refine and pass a comprehensive stablecoin bill that incorporates the best of both approaches—the prudential rigor of the bank-centric model and the innovation-friendly flexibility of a dual license system. Done right, stablecoin legislation will reinforce the dollar’s role as the bedrock of global finance in the digital age, unlock new fintech innovation and competition domestically, and enhance financial integrity.

Christos Makridis

Changing Compensation Calls for Updated Social Contract

This article was originally published in Real Clear Markets.


By Christos Makridis

March 13, 2025

American workers are witnessing a profound shift in how they are compensated. A century ago, a job’s pay was almost entirely a paycheck, but now nearly a third comes as benefits like health insurance, retirement plans, and stock options. Moreover, the growth in benefits is concentrated among wealthier workers, leaving the average American behind in an era of rapid technological change, according to a recent working paper I co-authored with Adam Bloomfield and Travis Cyronek.

The policy focus has recently shifted toward the American worker. “The American Dream is rooted in the concept that any citizen can achieve prosperity, upward mobility, and economic security. For too long, the designers of multilateral trade deals have lost sight of this,” said Treasury Secretary Scott Bessent in a recent talk to the Economic Club of New York. The Trump Administration has pointed out that the average American worker has borne the bulk of globalization’s burden, consistent with a large body of empirical evidence on globalization and trade. That is not to say there have not been benefits from lower prices, but we need to acknowledge the costs.

Our recent working paper points out that the burden on the American worker may be even more severe than previously thought: while we often talk about wages, total compensation – which also includes benefits – tells an even tougher story for the average worker.

At the turn of the 20th century, benefits made up virtually none of a worker’s compensation, but by the late 1990s, over a quarter of a typical worker’s compensation came from non-wage benefits. Much of this transformation occurred in the post-World War II era: employer-funded “fringe” benefits soared from about 7% of worker compensation in 1947 to 18% by 1979, and now hover around a third of total compensation. Yet expensive benefits, while valuable to some workers, often go unused and can leave workers feeling locked into their jobs, even as other workers lack basic benefits altogether.

On the one hand, benefits like health coverage and retirement contributions provide security and long-term value. On the other hand, many workers would prefer, or urgently need, higher wages instead of benefits they can’t readily spend. Either way, our new paper shows that both wage and benefit growth for middle- and low-income workers have lagged behind productivity.

Moreover, millions of low-wage workers get few or no benefits from their jobs – no health plan, no retirement account, no paid time off. As a result, the gap in overall compensation (wages + benefits) between high- and low-paid workers is even wider than the wage gap alone. A cashier or care aide might earn a bare minimum wage with no health coverage, while a higher-paid manager not only earns more per hour, but also gets thousands of dollars in insurance and pension contributions. In other words, the people who can least afford out-of-pocket medical costs are the least likely to get health coverage through their jobs.

To address the challenges posed by benefit-heavy compensation structures, we need to find ways of decoupling basic benefits from a single employer. The expansion of generative artificial intelligence has likely spurred greater self-employment, so now more than ever we need to think through ways of adapting labor market institutions to promote healthy growth. 

We also need to consider how to incentivize employers to extend benefits to part-time and low-wage employees. This could involve tax credits for small businesses that provide health insurance or retirement plans to lower-paid staff, or penalties for large companies that leave most workers uncovered. Or it could involve expanding employee stock ownership plans (ESOPs) that allow employees to reap the profits of the firm so that they can make their own choices about which benefits to purchase on the open market.

The changing nature of compensation in America – from straight wages toward benefit-heavy packages – calls for an updated social contract. Without intervention, the benefits revolution will continue to bypass millions of workers, accelerating income inequality and social fragmentation. By modernizing policies to reflect how people are paid today, we can protect the dignity of work and strengthen the American workforce across all income levels. 

Christos Makridis

Trump’s crypto reserve is being panned by crypto leaders. Here’s why it’s actually a good idea

This article was originally published in Fortune Magazine.

The recent announcement by the United States that it will establish a strategic crypto reserve featuring Bitcoin, Ethereum, XRP, Solana (SOL), and Cardano (ADA) is a major milestone for national security and economic policy. By integrating these digital assets into a formal reserve, the U.S. not only fortifies its national security posture, but also strategically supports and leads the growth of the private digital asset market worldwide.

The announcement received criticism from some crypto leaders, such as Coinbase CEO Brian Armstrong, who had pushed for only including Bitcoin, and 8VC general partner Joe Lonsdale, a Trump supporter who argued the government should stay out of crypto. Some have also suggested that there was insider trading, but these accusations have been speculation so far. Do not forget that there is vast insider trading outside of crypto—so much that there’s even an app called Autopilot that allows retail users to replicate the trades of politicians.

We’ll get to the advantages of having a strategic reserve, but first let’s pause on whether such a reserve should hold only Bitcoin. Armstrong is a laudable leader, and he makes an important point—that we should focus on Bitcoin because of its relative stability and strength. But blockchain is so much more than just Bitcoin. Other tokens have not been around as long, and thus their price volatility is greater, but that doesn’t make them any less strategic.

In fact, the newer generation of tokens often has more sophisticated consensus mechanisms and offers users greater utility: ETH supports decentralized apps, for example, and XRP supports cross-border transactions at scale. We cannot dismiss these because BTC was “first.”

Crypto reserve benefits

Let’s explore the upside of a strategic crypto reserve.

First, the establishment of a crypto reserve provides a hedge against escalating geopolitical risks. Historically, U.S. economic power has relied heavily on the dominance of the dollar, but this dominance has faced challenges—especially lately—from geopolitical rivals seeking alternative financial channels to circumvent U.S.-led financial systems and sanctions. By holding digital assets, the U.S. expands its bargaining power beyond traditional fiat currency, providing an alternative layer of economic leverage. In times of tension or uncertainty, digital assets offer resilience against targeted economic disruptions, sanctions, and currency manipulation.

Moreover, each of the chosen digital assets brings distinct strategic advantages that enhance national security infrastructure. For example, XRP is renowned for its ability to execute cross-border transactions with exceptional speed and minimal cost. Such capabilities are integral during crises requiring immediate international monetary settlements or aid distribution. Similarly, Solana’s high-performance blockchain provides robust support for scalable and secure applications, such as secure communications infrastructure or real-time monitoring of critical national assets. Cardano, known for its serious approach to governance, transparency, and security, offers additional prospects for stability and reliability.

But second, here’s a fact that might be overlooked: The formation of this crypto reserve also carries profound implications for private digital asset markets. The recent federal endorsement will serve as a powerful catalyst for market confidence and institutional adoption. Although support for digital assets has already been growing, institutional investors and major financial institutions have still hesitated to engage fully with cryptocurrencies due to regulatory uncertainty and concerns over legitimacy. The launch of an official U.S. crypto reserve sends a powerful signal: These digital assets are not only legitimate, but also strategically valuable.

The strategic crypto reserve contrasts sharply with the alternative scenario of a Central Bank Digital Currency (CBDC), in which digital asset management is entirely centralized under government control. Unlike a CBDC, which could displace private banks and the market for stablecoins by monopolizing digital asset flows and potentially stifle innovation through excessive centralization, the strategic crypto reserve enables the government to collaborate with private entities, fostering a balanced, vibrant digital asset ecosystem. Other research using cross-country data has also found that CBDCs do little to reduce inflation or raise productivity, but rather reduce financial well-being, particularly among vulnerable populations.

Crypto confidence

This alternative approach will help support the growth of the private market for digital assets. In particular, startups, as well as incumbent financial institutions, can more confidently invest in infrastructure, talent acquisition, and research initiatives knowing they have clear governmental alignment. Clear governmental participation in digital asset markets can streamline regulatory processes, ensuring private entities can innovate securely within well-defined legal boundaries. Countering malicious influences in crypto means bringing more transparency to the market.

The U.S. strategic crypto reserve is a sophisticated approach that addresses geopolitical vulnerabilities and economic innovation simultaneously. By diversifying its strategic reserves into digital asset holdings, the nation strengthens its national security by broadening economic leverage and creating an alternative financial buffer against external interference. Federal involvement also helps legitimize and invigorate the private digital asset sector, creating conditions for exponential market growth and innovation. Any action necessarily creates new risks, but these can and should be managed, and we shouldn’t overlook the potential upsides.

Christos Makridis

Embracing FinTech: How CFPB Can Unlock the Future of Earned Wage Access

This article was originally published in Real Clear Markets.

The Consumer Financial Protection Bureau (CFPB) has occupied many headlines lately, but the change in leadership largely reflects a different approach to consumer empowerment rather than a departure in priorities. Among the many ways that the Trump Administration can improve on the status quo is the treatment of earned wage access (EWA) products by the Bureau. EWA products allow employees to access a portion of their earned wages before payday, often for a small fee or for free. The cost to employees is significantly lower than that of other options, including payday loans, which often carry annual percentage rates (APRs) exceeding 300%. EWA fees typically range from $1 to $5 per transaction or are covered through alternative funding mechanisms like merchant interchange fees.
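To make the cost gap concrete, here is a rough, illustrative sketch in Python. The $300 advance, the 14-day payback window, and the $15-per-$100 payday fee are illustrative assumptions (the payday figure is a commonly cited benchmark), not numbers from this article:

```python
# Illustrative comparison of short-term liquidity costs (assumed figures).

def implied_apr(fee: float, principal: float, days: int) -> float:
    """Annualize a flat fee as a simple, non-compounding percentage rate."""
    return (fee / principal) * (365 / days) * 100

advance, days = 300.0, 14
ewa_fee = 5.0                        # upper end of the $1-$5 range cited above
payday_fee = 15.0 * (advance / 100)  # assumed $15 per $100 borrowed -> $45

print(f"EWA:    ${ewa_fee:.2f} fee  ~ {implied_apr(ewa_fee, advance, days):.0f}% APR-equivalent")
print(f"Payday: ${payday_fee:.2f} fee ~ {implied_apr(payday_fee, advance, days):.0f}% APR-equivalent")
# EWA:    $5.00 fee  ~ 43% APR-equivalent
# Payday: $45.00 fee ~ 391% APR-equivalent
```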

EWA providers do not charge interest, require collateral, or impose penalties for non-repayment. More importantly, because EWA draws on wages already earned, it does not create new debt obligations for workers. Some providers integrate directly with payroll systems, ensuring that any advance is automatically deducted from the employee’s next paycheck, eliminating default risk. This structure allows EWA fees to remain lower than traditional short-term credit options while offering a more transparent alternative to overdraft fees and high-cost lending.

Companies already serving consumer financial needs are well-positioned to expand into this space. Chime’s MyPay, for instance, enables consumers to access wages on their own schedule without hidden costs by connecting directly to payroll systems and leveraging merchant-funded models. Instead of employers taking the easy way out by pushing costs onto workers (i.e., “paying for their pay”), they can explore partnerships with FinTech providers and challenger banks to drive innovation in benefits delivery. This shift could not only lower costs, but also increase financial stability for employees who currently live paycheck to paycheck.

However, previous CFPB leadership made such FinTech partnerships tougher by classifying EWA programs as a type of consumer loan. That categorization imposed costly regulatory requirements under the Truth in Lending Act (TILA), treating EWA advances as if they were traditional credit products. TILA mandates extensive disclosures, compliance costs, and risk assessments that are unnecessary for a product that simply provides early access to wages. This regulatory burden raises the cost of providing EWA, forcing providers to either pass higher costs onto employees or exit the market altogether, reducing financial flexibility for workers.

With a new CFPB director expected to take a fresh look at these regulations, the opportunity exists to rethink the treatment of EWA in a way that balances consumer protection with financial innovation. There is no doubt that we need some regulations to set guardrails for markets, but the overarching concern is that we have witnessed a proliferation of regulations that do little to advance consumer safety and instead generate unintended consequences, as my work with Alberto Rossi in 2020 has shown. Policymakers should focus on ensuring transparency and cost efficiency while allowing EWA providers to build models that eliminate fees for employees.

One such model leverages merchant interchange fees and employer partnerships to fund EWA services. When employees access their wages through an EWA-linked card, merchants pay a small fee—typically around 1%—which can be reinvested into funding wage advances. This creates a sustainable revenue stream without burdening workers with direct fees. Some fintech products, like Chime’s MyPay, have already adopted this approach, offering free EWA services by integrating directly with payroll providers and employer benefits programs.
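As a back-of-the-envelope sketch of how the merchant-funded model can pencil out, with the monthly card spend an assumed figure:

```python
# Hypothetical per-user economics of a merchant-funded EWA program.
interchange_rate = 0.01      # ~1% paid by merchants, per the article
monthly_card_spend = 600.0   # assumed spend routed through the EWA-linked card

revenue = interchange_rate * monthly_card_spend
print(f"Interchange revenue: ${revenue:.2f} per user per month")
# ~$6/month per user, enough to cover one or two advances at the
# $1-$5 per-transaction cost cited earlier, without charging the worker.
```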

For employers, EWA programs also offer cost savings. Running payroll more frequently is expensive and administratively complex, and EWA provides a way to give employees financial flexibility without increasing payroll cycles. In turn, this reduces reliance on predatory payday lenders, which research has linked to higher bankruptcy rates among low-income workers.

While former director Rohit Chopra’s tenure at the CFPB has ended, the broader goal of improving financial access for workers remains. Regulatory compliance for its own sake is empty; fostering a financial ecosystem where innovation lowers costs for workers and expands economic opportunity should be the priority. Reclassifying EWA as something other than a loan is a first step in that process, and it points to many more opportunities to modernize financial regulations in ways that enhance worker financial stability without stifling innovation.

Christos Makridis

The secret weapon to fixing our broken immigration system is right in front of us

This article was originally published in Fox News (with Corey DeAngelis).

X owner Elon Musk and entrepreneur Vivek Ramaswamy sparked a debate in December when they advocated allowing more legal immigration for high-skilled workers – for example, through H-1B visas – to make America more competitive. President-elect Donald J. Trump endorsed the policy in a statement to the New York Post shortly after the dispute broke out.

Conservatives on both sides of this discussion should be able to agree on one thing: we would not need to import as much talent if we had a more effective education system.

The latest data from the National Assessment of Educational Progress, also known as the "nation’s report card," shows that fewer than one in four eighth grade students are proficient in math and less than a third are proficient in reading. The latest international assessment ranks us 24th in math – in the middle of the pack – despite the U.S. spending nearly $20,000 per public school student each year, more than just about any other country in the world.

U.S. 4th grade math scores have fallen 18 points since 2019 – a decline larger than that of all but three countries: Azerbaijan, Iran, and Kazakhstan.

We can start fixing the education crisis by improving the efficiency of educational resource allocation. Mountains of empirical evidence in economics research indicate that misallocation is one of the greatest impediments to a nation’s economic growth, and the same holds within the educational services sub-sector. To that end, improving the efficiency of public education can produce multiplier effects for the nation as a whole.

Trump appointed both Musk and Ramaswamy to head the newly formed Department of Government Efficiency (DOGE) in November. In his statement announcing DOGE’s new leaders, Trump said his administration will "dismantle Government Bureaucracy, slash excess regulations, cut wasteful expenditures, and restructure Federal Agencies."

It’s no secret waste runs rampant through our public school system. The U.S. spends over $900 billion per year on education for lackluster results. The current system is not serving the students, and makes teachers’ lives more difficult, so now is the time to start thinking about how to get bigger bang for our buck in the Department of Education. We need to inventory where current resources are going, and what outcomes they’re driving – plain and simple.

But tackling this apparent low-hanging fruit can only do so much to cut waste. After all, about 90% of all public-school funding comes from state and local sources, not the federal government.

That’s why we have to understand the root cause behind the deteriorating student outcomes. A major potential factor is administrative bloat in American education. The latest data from the National Center for Education Statistics show that student enrollment has only increased by about 5% since 2000, but the number of teachers employed by the system has grown twice as fast as students, by about 10%, over the same period. School district administrative staff has increased by about 95%, or 19 times the rate of student enrollment growth.

We’ve increased inflation-adjusted spending per student by more than 160% since 1970 and the teachers aren’t seeing the money. Teacher salaries have only increased by 3% in real terms over the same period.

The problem is that the public school system operates as a monopoly with weaker incentives to spend money wisely. But public-school unions do have a strong incentive to advocate for hiring more people, particularly in states that do not have right-to-work laws. Additional staffing means more dues-paying members and a larger voting bloc.

Our just-released study provides the first evidence that unions are driving administrative bloat in education. Using data from the National Center for Education Statistics and the American Community Survey between 2006 and 2024, we find a robust positive relationship between union density and staff-to-student ratios, and negative effects of right-to-work (RTW) laws on these ratios. These effects are largely driven by the expansion of administrative and support roles rather than teachers, and they are concentrated in non-RTW states.

Specifically, we find that a 10-point increase in teachers union density is associated with a one-point increase in year-to-year staffing growth.
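For readers who want to see what that kind of estimate looks like, here is a minimal sketch of a state-year panel regression. The data file, variable names, and specification are hypothetical, not the study’s actual code:

```python
# Hypothetical sketch: staffing growth on union density and right-to-work status.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("state_year_staffing.csv")  # assumed NCES/ACS state-year panel

model = smf.ols(
    "staff_growth ~ union_density + rtw + union_density:rtw + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})  # cluster by state

print(model.params["union_density"])
# The headline result corresponds to a coefficient of roughly 0.1:
# a 10-point rise in union density ~ one point faster staffing growth.
```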

In Chicago, a union stronghold, staffing has increased by a whopping 20% since 2019 even though student enrollment has plunged 10%. In Texas, one of six states that outlaws collective bargaining for public employees, staffing has increased by 8% – much closer to its 2% growth in student enrollment – over the same period. Our results show that these examples are not mere anecdotes – this has been happening at scale.

Injecting competition into the K-12 education system would put pressure on school districts to redirect otherwise wasteful spending into the classroom. Trump can help make this happen by getting congressional Republicans in line to pass school choice. The Educational Choice for Children Act already passed out of the House Ways and Means Committee last September, and President-elect Trump said he would sign it.

Improving the efficiency of government should be a non-partisan issue, especially in a sector that hits so close to home for every American – education. It’s now up to Congress to deliver for the parents who put them in office. Allowing parents to direct the upbringing of their children is the right thing to do, but it will also make America more competitive and make education great again.

Christos Makridis

Making crypto mainstream requires greater efforts to stop fraud

This article was originally published in Cointelegraph.

We find it easy to talk about the benefits of the digital economy, whether the internet or digital assets, but the costs are often overlooked. Whether it is the surge in human trafficking on social media platforms or the rise of cybersecurity vulnerabilities, the expansion of the digital economy comes with new risks to manage.

The digital asset community is no different and, to scale and become sustainable, it must confront the prevalence of fraud. And it’s not hard: distributed ledger technologies are already demonstrating their value by solving concrete use cases. This week in Vienna, Austria, the Austrian National Bank — together with the Complexity Science Hub and other sponsors — is hosting a conference on advances in financial technology, with a wide array of presenters who have researched value-enhancing uses of blockchain technology.

Thanks to pioneering work by the Federal Trade Commission’s Consumer Sentinel, we now have basic statistics on the incidence of fraud, the perpetrators, and the countries that exhibit the greatest violations. Using these data on complaints, Michel Grosz and Devesh Raval from the FTC show that it is possible to identify countries with excess levels of fraud based on their level of exports and to whom they are exporting. We need this caliber of data and the processes to support its collection to make strides in countering fraud.

Unfortunately, crypto does not have a great reputation on this front. The FTC released data showing $114 million in reported fraud from Bitcoin ATMs (BTMs) in 2023 — and the number of crypto scams has surged in recent years. Of course, we need to view these statistics in perspective: fiat currencies remain the currency of choice for fraud across the world, so we should not compare the worst of crypto with the best of fiat – it’s not an apples-to-apples comparison. Nevertheless, we should still strive to establish the right incentives and processes within the digital asset ecosystem to counter fraud wherever possible.

Fortunately, there is already a wave of blockchain use cases countering fraudulent activity. Consider, for instance, the role of financial auditing in ensuring the integrity and transparency of organizations. Currently, auditors lack the ability to cross-check transactions between different organizations, a limitation that can lead to misreporting scandals involving millions of dollars and leaves many crypto audits as more for show. To address this, new blockchain-based protocols, such as Cross Ledger cOnsistency with Smart Contracts (CLOSC) and Cross Ledger cOnsistency with Linear Combinations (CLOLC), are emerging that will enable auditors to verify cross-ledger transactions more efficiently, with built-in privacy and security properties such as transaction-amount privacy and organization-auditor unlinkability.
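To give a flavor of the underlying idea (and only a flavor: CLOSC and CLOLC rely on smart contracts and cryptographic machinery well beyond this toy), here is a sketch in which two organizations commit to the same transaction on their respective ledgers and an auditor checks consistency without either ledger publishing the raw amount:

```python
# Toy illustration of cross-ledger consistency; NOT the CLOSC/CLOLC protocols.
import hashlib
import secrets

def commit(tx_id: str, amount: int, salt: bytes) -> str:
    """Hash commitment that hides the amount from anyone without the salt."""
    return hashlib.sha256(f"{tx_id}|{amount}".encode() + salt).hexdigest()

salt = secrets.token_bytes(16)               # shared by the two counterparties
ledger_a = commit("tx-42", 1_000_000, salt)  # posted on organization A's ledger
ledger_b = commit("tx-42", 1_000_000, salt)  # posted on organization B's ledger

# An auditor can confirm the two ledgers record a consistent transaction by
# comparing commitments, without either ledger exposing the amount itself.
assert ledger_a == ledger_b
```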

Similarly, take scalability as another example, widely recognized as necessary for institutional adoption. Layer-2 (L2) solutions such as rollups help solve the scalability problem of layer-1 (L1) blockchains by handling transactions off the main chain and then posting the results back. However, a big concern is ensuring the security of these rollups, especially making sure that the data posted is accurate.

One recent study proposed a "watchtower" system where independent actors (watchtowers) are rewarded for keeping an eye on transactions and raising alarms when something seems wrong. These watchtowers are required to prove that they’ve been diligent in their work through a system called "proof of diligence," which ensures they’ve monitored the transactions properly. They can also challenge false data, and if they catch errors, they earn rewards. A key part of the solution is not just the technology, but also the economics of designing adequate incentives to prevent wrongdoing and promote trust.

Value-enhancing examples abound in the blockchain ecosystem, as the AFT conference in Vienna will showcase, but we need to do a better job of quantifying the benefits of real use-cases and amplifying the integral role that they play in enabling economic and social activity. Indeed, one of the greatest use-cases of blockchain technologies, drawing on its roots from cryptography, is the ability to improve security and counter malicious actors. But we need to get more serious in the way we talk about and pitch blockchain as a solution.

Christos Makridis is a guest columnist for Cointelegraph, an associate research professor at Arizona State University, an adjunct associate professor at University of Nicosia and the founder/CEO of Dainamic Banking. He holds doctoral degrees in economics and management science and engineering from Stanford University.

Christos Makridis

If your country has adopted a CBDC, you might be suffering

This article was originally published in Cointelegraph.

We’re often told that central bank digital currencies (CBDCs) will promote "financial inclusion" and help people around the globe. However, preliminary research results indicate the opposite could be true: Where CBDCs have been adopted, well-being has declined in recent years — particularly among young people and those with low incomes.

My new research paper provides the first comprehensive evaluation of their early effects on macroeconomic indicators and subjective well-being, utilizing cross-country data between 2019 and 2023. The results suggest that the benefits may be more limited than initially anticipated, coupled with potential negative effects on individual well-being and financial stability.

Limited economic benefits and unintended consequences

Data from the World Bank indicates — contrary to what you may think — higher-income countries are more likely to pilot or launch CBDCs, with these countries having, on average, five percentage points higher per capita GDP. While these countries also tend to have larger populations — largely driven by China and India — there are no significant differences in net migration rates, male unemployment rates, or urban populations.

Despite the enthusiasm surrounding CBDCs, the analysis suggested that their impact on key economic indicators — such as GDP growth and inflation — has been minimal. The study's statistical models compared countries that piloted or launched CBDCs between 2019 and 2023 with countries that did not.

Recognizing that countries that pilot or launch CBDCs may be systematically different from their counterparts, I also created a "synthetic control" group that matched countries with CBDCs with others based on a nonlinear combination of controls. In other words, while there was no single control country, a combination of characteristics over each country allowed for the construction of a "synthetic control." Where possible, data was used to find how measurements within countries had changed after CBDC adoption.
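As a sketch of what constructing such a counterfactual involves (illustrative only, not the paper’s exact estimator), one chooses nonnegative donor-country weights that sum to one so that the weighted donors match the CBDC country’s pre-adoption characteristics:

```python
# Minimal synthetic-control sketch: fit donor weights on pre-period features.
import numpy as np
from scipy.optimize import minimize

def synthetic_weights(x_treated: np.ndarray, x_donors: np.ndarray) -> np.ndarray:
    """x_treated: (k,) pre-period features; x_donors: (n_donors, k) matrix."""
    n = x_donors.shape[0]
    loss = lambda w: np.sum((x_treated - w @ x_donors) ** 2)
    result = minimize(
        loss,
        x0=np.full(n, 1 / n),         # start from equal weights
        bounds=[(0, 1)] * n,          # weights are nonnegative
        constraints=({"type": "eq", "fun": lambda w: w.sum() - 1},),  # sum to one
    )
    return result.x

# The estimated effect is the gap between the treated country's post-adoption
# outcome and the weighted ("synthetic") donor outcome.
```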

The study found no evidence that CBDCs correlated with greater GDP per capita or lower inflation. These findings challenge the prevailing narrative that CBDCs are a panacea for economic challenges, particularly in low- and middle-income countries.

However, macroeconomic indicators only go so far, especially in developing countries where the data might be less reliable. Gallup and its World Poll — the leading source of data for constructing measures of subjective well-being across countries over time — provided the data for two additional outcomes of interest: whether an individual was thriving and their financial well-being. The former is measured from responses to two self-assessed rankings, of current life satisfaction and of expected life satisfaction over the next five years, both on a 0-10 scale. Financial well-being is measured from several self-assessed questions about the ease of paying the bills and financial anxiety.
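Gallup’s published cutoffs make the “thriving” measure concrete; the rule below (current life ≥ 7 and anticipated life ≥ 8) follows Gallup’s standard definition, though it should be checked against the World Poll codebook:

```python
# Gallup-style "thriving" classification from two 0-10 Cantril ladder items.
def is_thriving(current: int, future: int) -> bool:
    """current: life today; future: expected life in about five years."""
    return current >= 7 and future >= 8

print(is_thriving(7, 8))  # True: thriving
print(is_thriving(6, 9))  # False: current rating too low
```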


Gallup's data indicated that CBDCs negatively correlated with both the probability an individual was thriving and their financial well-being — a result that was concentrated among younger, lower-income populations. These groups, who are often the target audience for financial inclusion initiatives, report feeling less financially secure.

After estimating these statistical models relating well-being to CBDC adoption, country controls, and individual demographics, the data identified where the declines in well-being have been greatest. The CBDC-interested countries with the largest declines between 2020-23 — in terms of respondents who were "thriving," according to the Gallup World Poll — were South Africa, Sweden, Thailand, and South Korea. (Sweden and South Korea have announced pilot CBDC programs, while South Africa and Thailand started developing their CBDCs in the first quarter of 2024.)

The importance of design and regulation

One of the critical challenges facing central banks is designing CBDCs that maximize benefits while minimizing risks. The risks associated with CBDCs are not trivial. They include potential financial instability through the disintermediation of banks, the erosion of privacy, and the concentration of financial power, which I’ve written about in Cointelegraph before. These risks are particularly pronounced if the central bank directly manages all aspects of the CBDC, which could undermine the traditional role of commercial banks and reduce the availability of credit, as Jesús Fernández-Villaverde and his coauthors showed in a 2021 paper.

Hybrid CBDC models could reduce some of these risks by allowing private-sector intermediaries to interact with customers while a central bank oversees the system, preserving a role for commercial banks and ensuring that CBDCs complement rather than disrupt existing financial systems. Additionally, implementing strong privacy protections and limiting the centralization of power are essential to prevent the potential misuse of CBDCs. That is in stark contrast to the way that some countries have implemented CBDCs, particularly China. However, further work is needed to assess how the architecture of the CBDC affects both economic and social outcomes — not just in theory, but very concretely.

Christos Makridis

Remote work does make more time for work-life balance. Here’s the data

This article was originally published in Fast Company.

There’s no debate that remote work is here to stay; the question is what shape it takes and how different types of workers are changing the amount of time they work. My new research shows that remote workers substantially reduced their time at work and increased their time in leisure after 2019, and these trends continued into 2023.

How to measure time spent working 

To explore the effects of this shift, I used data from the American Time Use Survey (ATUS), restricted to employed, full-time respondents aged 25 to 65, providing a comprehensive measurement of time allocated to various activities with minimal measurement error.

This data set offers more reliable insights than other labor supply surveys, such as the Current Population Survey or the American Community Survey, due to reduced recall bias. Furthermore, the analysis focused on non-self-employed workers, as their occupational classification is clearer. 

One of the major benefits of the ATUS is that, unlike many existing surveys, it measures a wide array of activities, not just time at work, allowing me to track time across work, leisure, household chores, childcare, and more. Leisure is defined as time spent socializing, passive and active leisure, volunteering, pet care, and gardening.
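A hedged sketch of those sample restrictions and the leisure aggregate, using hypothetical column names (the real ATUS extract’s variables differ):

```python
# Hypothetical ATUS filtering and leisure aggregation.
import pandas as pd

atus = pd.read_csv("atus.csv")  # assumed respondent-level ATUS extract

sample = atus[
    atus["age"].between(25, 65)
    & (atus["employed"] == 1)
    & (atus["full_time"] == 1)
    & (atus["self_employed"] == 0)  # clearer occupational classification
].copy()

# Daily leisure minutes, per the definition above.
leisure_cols = ["socializing", "passive_leisure", "active_leisure",
                "volunteering", "pet_care", "gardening"]
sample["leisure_min"] = sample[leisure_cols].sum(axis=1)
```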


Since the ATUS collects detailed 24-hour time diaries in which respondents report all the activities from the previous day in time intervals, the records are also more reliable than standard measures of time allocated to work available in other federal data sets that require respondents to recall how much they worked over the previous year or week. These diaries contain much less noise than typical survey results. 

To measure remote work, I used the “remotability” index from professors Jonathan Dingel and Brent Neiman’s 2020 paper in the Journal of Public Economics, which is based on the Department of Labor’s O*NET task-level data on how many tasks in an occupation can be done remotely. 
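Continuing the sketch above with equally hypothetical file and column names, attaching the occupation-level index and comparing work time by year might look like this:

```python
# Hypothetical merge of Dingel-Neiman teleworkability scores onto the sample.
import pandas as pd

sample = pd.read_csv("atus_sample.csv")         # output of the earlier filtering step
remote = pd.read_csv("dn_teleworkability.csv")  # columns: occ_code, teleworkable

sample = sample.merge(remote, on="occ_code", how="left")
sample["remote_job"] = sample["teleworkable"] > 0.5  # assumed cutoff for "remote"

# Mean daily minutes at work, by year, for more- vs. less-remotable jobs.
print(sample.groupby(["year", "remote_job"])["work_min"].mean().unstack())
```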

Updated 2023 time use patterns 

My recently updated research compares how people have changed their time allocation between 2019 and 2023 when they work in remote jobs versus hybrid or on-site.

While remote workers tended to spend 28 minutes more per day at work than their non-remote counterparts in 2019, they drastically reduced their time at work over the pandemic: 32 minutes per day less in 2020, 41 minutes less in 2021, 57 minutes less in 2022, and 35 minutes less in 2023. Conversely, workers in more remote jobs allocated more time toward leisure over these years.

Do these changes in time use simply reflect differences in the type of worker within remote jobs over time? All my results control for demographic factors, such as age and education, hourly wages, and differences across industries and occupations. I also studied changes in the composition of more versus less remote jobs over these years, finding minimal differences. 

Interestingly, time spent on home production (household chores, caring for household members) and shopping did not increase significantly. What did increase, slightly, was time spent on other activities not otherwise classified. Overall, remote workers are not merely reallocating their work time to household activities but are genuinely engaging in more leisure.

Labor and leisure changes varied across different demographic groups, and these trends diverged in some cases in 2023. In particular, men reduced their time at work by 33 minutes per day in 2021 and 58 minutes per day in 2022, relative to 2019, but only 13 minutes per day in 2023.

In contrast, women reduced their time at work throughout these years, and the decline intensified in 2023, when they cut their time at work by 76 minutes per day relative to 2019. Singles and those without children also showed a steeper decline in labor and an increase in leisure activities, though the effects were less significant in 2023.

Crucially, these declines in time at work are not driven by the role of commuting. While commute times have declined overall, the bulk of the decline in time at work has been driven by actual time working—not commuting.

These results provide insights into the ongoing debate about quiet quitting, where employees may be dissatisfied with their jobs and reduce their efforts. While the data does not directly measure job satisfaction, the ATUS includes some information on subjective well-being. Remote workers reported slightly higher life satisfaction and felt more rested than their in-office counterparts in 2021, suggesting they are not necessarily quiet quitting. Instead, they might be reallocating their time to activities they prefer, enabled by the flexibility of remote work.

Understanding why 

Why such a large change in how much men and women worked in 2023? Women who work in remote-intensive jobs typically spend 46 minutes per day more at work than their less remote counterparts, consistent with my writing last year, which did not yet include 2023 data. However, the decline in time at work accelerated in 2023 for women in remote jobs.

One possibility is that more women than men have reported burnout, even as they decreased their time at work in 2022. Another possibility is that there has been some switching back to in-person work, which may be harder for some women who have more caregiving responsibilities, particularly in light of my prior research with Chris Herbst and Ali Umair documenting the significant effect that state regulations had on the availability of childcare during the pandemic. In this sense, men in more remote-intensive jobs may have returned to the office more in 2023 under hybrid arrangements, whereas women did not.

I find that time spent in childcare for women with children grew by 28 minutes per day in 2022, but only 15 minutes per day in 2023, relative to 2019. While the same patterns do not exist for men and are part of the explanation, they are not the full story.

Could preferences over in-person versus remote work explain the differences in time use? In companion work, I found that men dislike full working-from-home arrangements, whereas I do not see the same for women.

The data also suggests a preference for remote work among women, so if these jobs have fewer opportunities and/or requirements around work, then the gradual return to office would result in fewer hours.

Coupled with all of these patterns are findings from Gallup surveys, which show that only 33% of employees are engaged, so burnout—or low engagement—may be a very real phenomenon.

What are the consequences for productivity? Time will tell, but the positive effects of greater flexibility may have offset the negative effects of fewer hours worked. However, much more work, including understanding differences in time use across genders, still has to be done.

Christos Makridis

Moving Slow and Fixing Things

This article was originally published on Lawfare with Iain Nash, Scott J. Shackelford, and Hannibal Travis.

Silicon Valley, and the U.S. tech sector more broadly, has changed the world in part by embracing a “move fast and break things” mentality that Mark Zuckerberg popularized but that pervaded the industry long before he founded FaceMash in his Harvard dorm room. Consider that Microsoft introduced “Patch Tuesday” in 2003, which began a monthly process of updating buggy code that has continued for more than 20 years. 

While it is true that the tech sector has attempted to break with such a reactive and flippant response to security concerns, cyberattacks continue at an alarming rate. In fact, given the rapid evolution of artificial intelligence (AI), ransomware is getting easier to launch and is impacting more victims than ever before. According to reporting from The Hill, criminals stole more than $1 billion from U.S. organizations in 2023, which is the highest amount on record and represents a 70 percent increase in the number of victims over 2022.

As a result, there are growing calls from regulators around the world to change the risk equation. An example is the 2023 U.S. National Cybersecurity Strategy, which argues that “[w]e must hold the stewards of our data accountable for the protection of personal data; drive the development of more secure connected devices; and reshape laws that govern liability for data losses and harm caused by cybersecurity errors, software vulnerabilities, and other risks created by software and digital technologies.” This sentiment represents nothing less than a repudiation of the “Patch Tuesday” mentality and with it the willingness to put the onus on end users for the cybersecurity failings of software vendors. The Biden administration, instead, is promoting a view that shifts “liability onto those entities that fail to take reasonable precautions to secure their software.”

What exact form such liability should take is up for debate. Products liability law and the defect model is one clear option, and courts across the United States have already been applying it using both strict liability and risk utility framings in a variety of cases including litigation related to accidents involving Teslas. In considering this idea, here we argue that it is important to learn from the European Union context, which has long been a global leader in tech governance even at the risk of harming innovation. Most recently, the EU has agreed to reform its Product Liability Directive to include software. When combined with other developments, we are seeing a new liability regime crystallize that incorporates accountability, transparency, and secure-by-design concepts. This new regime has major implications both for U.S. firms operating in Europe and for U.S. policymakers charting a road ahead.

The EU’s various levers to shape software liability, and more broadly the privacy and cybersecurity landscape, are instructive in a number of ways in helping to chart possible paths ahead, and each is deserving of regime effectiveness research to gauge their respective utility. These include:

  1. Extending Products Liability to Include Cybersecurity Failings: Following the EU’s lead in expanding the definition of “product” to include software and its updates, U.S. policymakers could explore extending traditional products liability to cover losses due to cybersecurity breaches. This would align incentives for businesses to maintain robust cybersecurity practices and offer clearer legal recourse for consumers affected by such failings.

  2. Adopting a “Secure by Design” Approach: New EU legislation, such as the Cyber Resilience Act, mandates that products be secure from the outset. U.S. policy could benefit from similar regulations that require cybersecurity to be an integral part of the design process for all digital products. This would shift some responsibility away from end users to manufacturers, promoting a proactive rather than reactive approach to cybersecurity.

  3. Enhancing Transparency and Accountability Through Regulatory Frameworks: Inspired by the EU’s comprehensive regulatory measures like the General Data Protection Regulation (GDPR) and the AI Act discussed below, the U.S. could benefit from creating or strengthening frameworks that enforce transparency and accountability in data handling and cybersecurity. Building on the recent guidance from the U.S. Securities and Exchange Commission that requires publicly traded companies to report material cybersecurity incidents within four days, this could include potential requirements for risk assessments, incident disclosures, and a systematic approach to managing cyber risks across all sectors, not just critical infrastructure.

Each of these themes is explored in turn.

Extending Products Liability to Include Cybersecurity Failings

The EU has taken a more detailed, and broader, approach to imposing liability on software developers than what has commonly been argued for in the U.S. context. 

In recognition that many products, from toasters to cars, have gotten increasingly “smart,” the EU began a process in 2022 to update its products liability regime, which had been in place and largely unchanged since 1985. As such, reforms agreed to under the Product Liability Directive include an expansion of what’s considered a “product” to cover not just hardware but also stand-alone software, such as firmware, applications, and computer programs, along with AI systems. Exceptions apply for certain free and open-source software, which has long been an area of concern for proponents of more robust software liability regimes.

Relatedly, the concept of “defect” has been expanded to include cybersecurity vulnerabilities, including a failure to patch. The notion of what constitutes “reasonable” cybersecurity in this context, such as a product that does not provide the expected level of service, builds on other EU acts and directives, discussed below.

Recovered damages have also broadened to include the destruction or corruption of data, along with mental health impacts following a breach. Covered businesses can also include internet platforms with the intent being that there is always an “EU-based business that can be held liable.” Even resellers who substantially modify products and put them back into the stream of commerce may be held liable. It’s now also easier for Europeans to prove their claims through the introduction of a more robust U.S.-style discovery process and class actions, along with easing the burden of proof on claimants and extending the covered period from 10 to 25 years in some cases.

Although the EU has long been a global leader on data governance and products liability, the same has not necessarily been the case for cybersecurity—particularly pertaining to critical infrastructure protection. In 2016, the EU worked to change that through the introduction of the Network and Information Security (NIS) Directive, which was updated in 2023 as NIS2.

Among other things, NIS2 expanded the scope of coverage to new “essential” and “important” sectors, including cloud and digital marketplace providers, and required EU member states to designate Computer Security Incident Response Teams (CSIRTs) and join Cooperation Groups, which are in essence international information sharing and analysis centers (ISACs). Covered businesses must take “appropriate” steps to safeguard their networks, secure their supply chains, and notify national authorities in the event of a breach.

In sum, NIS2 regulates software in a manner more familiar in the U.S. context, relying on information sharing and a risk management approach to standardize common activities like incident reporting.

Further, the European Union’s Cybersecurity Act, which took effect in June 2019, establishes a comprehensive framework for the certification of cybersecurity across information and communications technology products, services, and processes. The regulation aims to bolster trust in the digital market by ensuring that these entities adhere to standardized cybersecurity criteria. This certification scheme is voluntary, but it affects manufacturers and service providers by enabling them to demonstrate their compliance with high levels of cybersecurity, thereby enhancing market perception and consumer trust in their offerings. The act fits within the broader EU strategy of leveraging regulatory measures over direct state control, epitomized by the role of the European Union Agency for Cybersecurity (ENISA). ENISA has become a major entity in shaping and supporting the cybersecurity landscape across the EU, despite facing challenges in establishing its authority and influence.

From a products liability perspective, the Cybersecurity Act shifts the landscape by integrating cybersecurity into the core criteria for product safety and performance evaluations. By adhering to established certification standards, companies not only mitigate the risks of cyber threats but also reduce potential legal liabilities associated with cybersecurity failures. The act encourages transparency and accountability in cybersecurity practices, pushing companies to proactively manage and disclose cyber risks, which can influence their liability in cases of cyber breaches.

This approach aligns with the EU’s broader regulatory security state model, which emphasizes governance through regulation and expertise rather than through direct governmental intervention. This model is characterized by the deployment of indirect regulatory tools and reliance on the expertise and performance of various stakeholders to manage security issues, rather than solely depending on direct state power and authority. The voluntary standards have posed challenges, however, leading to uneven adoption and leaving vulnerabilities in products that do not comply with these standards and minimum security objectives. Nevertheless, some studies have commented that the act has at least helped the European Union behave in a coordinated way.

Adopting a “Secure by Design” Approach

In addition to the proposal to include software within the scope of products liability legislation, the EU has introduced unified cybersecurity requirements for products sold within the common market, including pure software products. The Cyber Resilience Act (CRA), a forthcoming EU regulation, combines detailed cybersecurity requirements, such as patch management and secure-by-design principles, with a comprehensive liability regime. The CRA can be considered more comprehensive than California’s “Internet of Things” (IoT) security law: its cybersecurity requirements go far beyond California’s reasonable-security-feature and password requirements, and it applies to both IoT and software products.

Fundamentally, the CRA requires that products be introduced to the market with all known vulnerabilities patched and that they have been developed under a “secure by design” basis. However, developers are also required to conduct and maintain a cybersecurity risk assessment, provide a software bill of materials listing out the third-party software components used in their products, and ensure security updates are available for a period of at least five years. Developers and manufacturers of ordinary products can self-certify conformity with the legislation while “important” and “critical” products will require a more in-depth and an independent conformity assessment, respectively.

Noncompliance with the CRA follows the model used in the GDPR and can result in a fine of up to 15 million euros or 2.5 percent of total revenue (whichever is larger) for breaches of core requirements, while other breaches can result in a fine of up to 10 million euros or 2 percent of total revenue. However, there is no mechanism under the act for a complainant to enforce the CRA directly, and complainants must petition their local regulator if they believe the requirements have not been met.
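In code form, the penalty ceiling described above is simply the larger of a fixed amount and a share of revenue, a direct transcription of the rule as stated:

```python
# CRA maximum-fine rule as described above (fixed floor vs. revenue share).
def cra_max_fine(total_revenue_eur: float, core_breach: bool) -> float:
    fixed, share = (15e6, 0.025) if core_breach else (10e6, 0.02)
    return max(fixed, share * total_revenue_eur)

print(cra_max_fine(2e9, core_breach=True))   # 50,000,000.0 -- 2.5% of revenue binds
print(cra_max_fine(1e8, core_breach=False))  # 10,000,000.0 -- fixed floor binds
```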

Enhancing Transparency and Accountability Through Regulatory Frameworks

The EU’s AI Act introduces a regulatory framework that, in the name of safety and transparency, protects users from harms caused by the failure of an AI system. The act classifies AI systems into three categories—prohibited, high-risk, and non-high-risk—and is reminiscent of the CRA in its comprehensive scope. Prohibited applications, such as those involving subliminal techniques or social scoring, are banned within the EU. High-risk applications, which include medical devices and credit scoring systems, must adhere to stringent requirements, including maintaining a risk management system, ensuring human oversight, and registering in the EU’s database of high-risk AI systems. Non-high-risk applications face minimal to no regulatory obligations.

The act also addresses general purpose AI models, like foundation and large language models, imposing obligations similar to those for high-risk systems. These include maintaining a copyright policy and publishing a summary of the training data. Enforcement is managed by domestic regulators and coordinated at the EU level by the newly established European Artificial Intelligence Board and the European Office for AI, where complaints can also be lodged against noncompliant AI providers.

There are penalties for noncompliance. Violations involving prohibited AI can result in fines of up to 30.3 million euros or 7 percent of total revenue. High-risk AI breaches may lead to fines of up to 15.14 million euros or 3 percent of total revenue, and providing misleading information to regulators can attract fines of up to 7.5 million euros or 1.5 percent of total revenue. Whether the higher or lower figure applies depends on whether the entity is a large corporation or a small to medium-sized enterprise. One of the major limitations in the EU’s AI liability regime, however, lies in its broad categorization of risk. In reality, there are many different dimensions of risk, to say nothing of the contested definition of fairness in AI systems. In particular, “explainability” and “interpretability” of AI systems are often used interchangeably, and that ambiguity will make it difficult to enforce and promote trustworthy AI practices.

In the event that a user is harmed following their use of a high-risk AI system, they will be able to benefit from a proposed companion directive, which introduces additional civil liability requirements for AI systems. Under the proposed directive, the user will be able to seek a court order compelling the provider of the AI system to disclose relevant evidence relating to the suspected harm.

However, the claimant will be required to demonstrate to the relevant court that the provider has failed to comply with its obligations under the AI Act in order for their claim to succeed. Harm that occurs to the claimant despite the provider meeting its obligations under the AI Act is not recoverable under this legislation.

This approach, as is the case with data privacy in the EU context, is far more comprehensive than the Biden administration’s AI executive order and sets out accountability and transparency rules that are already shaping global AI governance.

As with the AI Act, the General Data Protection Regulation is a comprehensive data protection law. It came into effect in the European Union on May 25, 2018, aiming to empower individuals with sovereignty over their personal data and simplify the regulatory environment for business. In particular, the GDPR requires that companies that process personal data be accountable for handling it securely and responsibly. This includes ensuring that data processing is lawful, fair, transparent, and limited to the purposes for which it was collected. Product and service providers must disclose their data processing practices and seek explicit consent from users in many cases, making them directly liable for noncompliance. The GDPR also gives individuals the option of demanding that a company delete their personal data or transfer it to another provider.

Although there are penalties for noncompliance for both primary data controllers and potential third parties, it has been very difficult to enforce and prove liability. For example, the European Union’s own internal analysis has explained how international data cooperation has been challenging due to factors like “lack of practice, shortcomings in the legal framework, and problems in producing evidence.” Furthermore, since consumers often are searching for specific information and do not have other options, they simply consent to the relevant disclaimers on a site to enter and never think twice about the data that was shared and/or the possibility of filing a lawsuit against a company for potential damages from, say, a data breach.

Furthermore, empirical studies generally point toward a negative effect of the GDPR on economic activity and innovation. Some studies have found that the GDPR led to a decline in new venture funding and new ventures, particularly in more data-intensive and business-to-consumer sectors. Others found that companies exposed to the GDPR incurred an 8 percent reduction in profits and a 2 percent decrease in sales, concentrated particularly among small and medium-sized enterprises. There is additional evidence that the GDPR led to a 15 percent decline in web traffic and a decrease in engagement rates on websites.

Finally, the Digital Services Act (DSA) “regulates online intermediaries and platforms such as marketplaces, social networks, content-sharing platforms, app stores, and online travel and accommodation platforms.” It took effect in a staggered process in 2022 and promised risk reduction, democratic oversight, and improvement of online rights. Articles 6(1), 9(1), and 22 of the DSA could be significant after cyberattacks, while Articles 17 through 21 could be crucial protections for users of online platforms whose accounts are suspended or terminated due to intrusions or misuse attributable to cyber threats. Article 9(1) obliges certain platforms to remove illegal material upon being served with notice of specific items by “judicial or administrative authorities.” Regarding online dangers other than intellectual property infringement and incitement to violence, Recital 12 of the DSA references “stalking” and “the unlawful non-consensual sharing of private images.”

In the United States, the law on loss of access to online accounts remains a patchwork, even in cases involving data breaches covered by federal statutes. While some courts allow breach of express or implied contract as a theory of recovery, others may not, and arbitration clauses are a formidable challenge in some cases. Articles 20(4) and 21 of the DSA strengthen the right to use online platforms and not to suffer arbitrary deprivation of access.

Settlements of class actions like those involving iPhone battery life and Google Chrome incognito mode suggest that claims of defective software and misleading marketing of technology already have traction in U.S. courts without further reforms. Products liability and data security litigation remains viable due to the similarity of many U.S. states’ laws and the intention of the federal class-action procedure to make asserting small-dollar claims economical.

Lessons for Policymakers

A natural question is whether Europe has taken a more active regulatory approach because its technology sector is much smaller. While a smaller technology sector inevitably means different political economy dynamics in Europe, including lower returns to lobbying, there is nonetheless a growing recognition that the absence of clearer guidelines and regulations is a lose-lose situation in the long run. For instance, a voluminous body of empirical literature documents a rise in concentration and market power, particularly among digital intermediaries, that could be attributed to lax and ambiguous guidelines. Only recently has the U.S. Securities and Exchange Commission introduced guidance requiring public companies to report data breaches within four business days of determining that an incident is material.

The EU’s efforts to extend products liability law to software, adopt a secure by design approach similar to that called for in the 2023 U.S. National Cybersecurity Strategy, and enhance transparency and accountability across the digital ecosystem have solidified its place as a global leader in tech governance.

Several of these steps could be taken at once, perhaps as part of the proposed American Privacy Rights Act, which would offer enhanced powers to the Federal Trade Commission to investigate deceptive or defective products and establish baseline privacy and cybersecurity expectations for American consumers.

At the highest level, if a products liability approach in the U.S. context is to be successful, Congress would need to introduce a package of reforms that would address various barriers to recovery, including the economic loss doctrine and the enforcement of liability waivers. Moreover, the array of EU initiatives surveyed above still give rise to uncertainty, such as a potential cap of 70 million euros on all claims for a specific defective item. And costs should not be underestimated—one U.S. House of Representatives Oversight and Accountability Committee report claimed 20.4 to 46.4 billion euros in new compliance and operation costs introduced by the DSA and the GDPR. Still, such estimates should be weighed against the staggering economic harm introduced by software vulnerabilities discussed above.

A best-case scenario would be for policymakers on both sides of the Atlantic, and beyond, to come together and find common ground to encourage the convergence of baseline software security expectations. This process could either be kicked off through a special event, such as a Global Responsible Software Summit modeled after recent ransomware and democracy summits, or be added to an upcoming major gathering.

No nation is an island in cyberspace, no matter how much some wish they were. How leading cyber powers—including the EU and the U.S.—approach the issue of software liability will make worldwide ripples, which, depending on how these interventions are crafted, could turn into a tsunami.

Christos Makridis

Bolstered by Faith

This article was originally published in City Journal.

The Covid-19 pandemic challenged much more than health-care systems—it also tested communities’ resilience, particularly their ability to handle economic shocks. While we often look to fiscal policy or other government actions to explain localities’ economic outcomes, an overlooked factor plays a significant role: religiosity.

Baylor University professor Byron Johnson and I researched whether communities with higher levels of religiosity before the pandemic fared better economically during the crisis. Comparing data from the Quarterly Census of Employment and Wages between 2019 and 2023 with religiosity levels from the Religion Census in 2010 and 2020, we found that communities where religious adherence had grown over the decade showed notably better employment and business-establishment trends during the pandemic years.

Why would religiosity matter so much in times of economic hardship? Religious communities often act as social safety nets, creating strong networks of moral and practical support. The solidarity this creates can help cushion the blow during economic downturns. Faithful communities often rally in hard times, providing assistance to those in need, from food banks to job networking. Their collective resilience supports not just individual believers but also the broader community infrastructure.

This truth doesn’t just hold for traditionally religious communities. Any community with strong social bonds and a shared sense of purpose, whether based around religious faith or other collective beliefs and practices, can show similar resolve. Weathering a crisis is about more than beliefs—it’s about the networks and mutual support those beliefs foster.

Our study suggests that shared faith and participation contribute significantly to local economies, helping them bounce back faster and stronger. These findings are particularly relevant for policymakers and community leaders. Investing in community-building efforts and supporting faith-based and other local organizations can be an effective strategy to develop economic resilience. This approach can also serve as preparation for managing future crises.

The interplay between religious observance and economic resilience highlights the need for a broader understanding of what strengthens communities and makes them capable of collective response. In a world where economic shocks are inevitable, recognizing and bolstering these networks could be essential to enduring future challenges.

Christos Makridis

The Silent Office

This article was originally published in City Journal.

The increasing politicization of American life has profound implications not only for social harmony but also for markets and the workplace. Polarization is changing office dynamics, as workers find it increasingly difficult to navigate religious and political issues on the job.

My recent article in the Journal of Economics, Management and Religion presents the results of a new, nationally representative Ipsos survey. The data illustrate a concerning trend: a large share of the American workforce is reticent to express personal views on social and political issues on the job, fearing repercussions that could stymie their career advancement. The survey found that roughly 42 percent of employees have withheld their opinions to protect their professional future, reflecting Americans’ deep-seated fear that self-expression could put their jobs in jeopardy.

That anxiety may be tied to the surging number of instances of religious and political discrimination. Indeed, the Equal Employment Opportunity Commission reported a 73 percent rise in religion-based discrimination charges between 1992 and 2020. The reality behind that statistic—a potentially substantial increase in unjust discrimination against religious employees—may have contributed to workers’ growing unease.

The effects of discrimination and self-censorship in the workplace bleed into the labor market. Would-be employees are unlikely to want to work for an intolerant employer. Some 40 percent of survey respondents, for example, indicated that they are less likely to apply to a company they perceive as being hostile to their beliefs. Such perceptions can affect productivity, too, making current employees less loyal to their employers and potentially lowering worker engagement. The broader trend of workers being willing to relocate for jobs that more closely align with their moral and political compass—often for significantly lower pay—underscores a desire for personal integrity, and the extent to which many feel that their current workplace stifles this aim.

Widening polarization also affects consumer behavior. Many Americans are willing to change their consumption habits based on brands’ political and social stances. Fifty-six percent of survey respondents said that they are likely to cease purchasing from brands that oppose their values, with 30 percent claiming already to have done so. This indicates buyers’ desire to support companies with similar values and demonstrates the deep connection between political and social identity and consumer loyalty.

While companies may feel pressured to take stands on issues, the data suggest that companies should be wary, or at least aware, that many consumers will switch to competitors in response to political posturing. These results challenge companies to focus on delivering value. Businesses need to find a balance that respects diverse viewpoints without compromising their principles.

Christos Makridis

Asking Too Much

This article was originally published in City Journal.

In 2023, the Consumer Financial Protection Bureau (CFPB) issued a final rule to implement Section 1071 of the 2010 Dodd–Frank financial regulation law, aimed at fostering transparency and fairness in lending by mandating the collection of information about the race and sexual orientation of small-business loan applicants. Though well-intentioned, the move has sparked a debate on privacy, data security, and operational challenges for financial institutions.

I recently released a working paper, based on a nationally representative survey of 2,996 respondents, that documents a pronounced reluctance among business owners to share sensitive personal information with lenders. This hesitancy not only challenges the objectives of Section 1071 but also raises questions about tensions between regulatory goals and individuals’ privacy concerns. My study highlights four key findings.

Reluctance to Share Sensitive Information. A substantial portion of respondents expressed discomfort with sharing personal demographic information with financial institutions: 65 percent of participants were strongly or somewhat opposed to sharing racial information, and an even higher share, 77.1 percent, opposed sharing their sexual orientation. This resistance underscores a broader concern about privacy and data security.

Demographic Variations in Comfort Levels. Older respondents and those with some college education were more likely to express sensitivity about sharing personal information. Interestingly, while majorities across all demographic groups were uncomfortable with revealing such information, males, married individuals, and those identifying as black, Asian, or conservative were comparatively less concerned: 57 percent of black and 55 percent of Asian respondents, compared with 68 percent of Hispanics, reported being uncomfortable divulging racial information.

Impact on Banking Preferences. Individuals hesitant to share their race or sexual orientation were 5–7 percentage points less likely to approve of banks having additional objectives such as promoting environmental sustainability or targeting specific groups for lending. This suggests a link between privacy concerns and a preference for banks to focus on traditional banking concerns.

Effects of Information Treatments. Since many consumers are unaware of how firms and third parties use their data, I also ran an information experiment in which respondents were shown prompts about the effects of data breaches. Respondents who saw a prompt about the 2021 leak of user data from the online trading platform Robinhood were 5 percentage points more likely to prefer not to share racial information with banks. To put that in perspective, the proportion of people who do not want to share racial information is already 65 percent, meaning that reminding people of data breaches makes an already reluctant population even less willing to share.

The findings underscore a critical dichotomy: while Section 1071 of Dodd-Frank seeks to illuminate and address disparities in access to credit, it inadvertently stokes fears of data misuse and breaches among borrowers. This tension is not without consequence. Financial institutions, tasked with implementing the CFPB’s new rule, face the dual challenge of complying with regulatory requirements and addressing borrowers’ fears. The study highlights the need for a nuanced approach to data collection–one that respects borrower privacy, while striving for the transparency and fairness that Section 1071 aims to achieve. My research calls for reevaluating how regulatory objectives are pursued and suggests the need for a framework that recognizes the legitimate concerns of borrowers, the very people whom the regulation was established to help.

Christos Makridis

Housing market data suggests the most optimistic buyers during the pandemic are more likely to stop paying their mortgages

This article was originally published in Fortune Magazine with William Larson.

Traditional methods for forecasting housing prices and broader economic indicators are proving insufficient. In our recent research, we explored an overlooked aspect of home buying: the significance of buyers’ expectations. We found that the anticipations of mortgage borrowers regarding future housing prices are crucial for understanding the health of the economy.

There’s a consensus that the expectations about future increases in housing prices and interest rates significantly influence housing market dynamics. The logic is straightforward: If individuals believe the value of homes will rise, they are more inclined to take on more debt. This effect is amplified in the housing market because you cannot bet against market downturns, making the positive outlooks of buyers more influential. Previous studies have indicated that this optimism can drive rapid increases in housing prices, creating “bubbles.” These bubbles often lead to inflated house prices, fueled by speculation.

What occurs, however, when housing prices remain elevated but expectations begin to decline?

Our findings indicate that expectations are critical in the decision-making processes of mortgage borrowers. During the COVID-19 pandemic, there was a period when confidence in future housing price increases waned, despite actual prices still rising.

We observed that borrowers who were initially the most optimistic about price increases were about 50% more likely to request mortgage forbearance–a pause or reduction in payments–than the broader mortgage-borrowing population (6% versus 4% in our study) during this episode. This underscores the significant impact of borrower expectations on the housing market and economic stability.
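
As a quick arithmetic check of that comparison, using only the two rates quoted above:

```python
# Relative likelihood implied by the two forbearance request rates in the study:
optimists_rate, population_rate = 0.06, 0.04
print((optimists_rate - population_rate) / population_rate)  # 0.5 -> ~50% more likely
```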

Expectations trump reality

We began our research with data from the Federal Housing Finance Agency, specifically the National Mortgage Database, and noticed something intriguing: People who before 2020 were optimistic about future increases in house prices were more likely to pause their mortgage payments early in the COVID-19 pandemic, despite the fact that house prices were still going up. This observation led us to understand that these borrowers were reacting more to their expectations about the future than to the actual market conditions at the time. When their outlook on house prices temporarily worsened, they opted for forbearance. However, as their optimism returned towards the end of 2020 and throughout the pandemic, these same borrowers began resuming their mortgage payments.

This pattern underscores how crucial expectations are in shaping how borrowers act, which, in turn, has significant effects on the broader economy. After our study period, which ended in 2022, expectations dropped substantially heading into 2023. Our findings suggest that the wave of optimistic borrowers between 2021 and mid-2022 may be particularly vulnerable to such drops in expectations if paired with negative equity or job loss. Thankfully for the mortgage market, the economy–and house prices–remained strong throughout this most recent episode of falling expectations.

Our research serves as a warning to those involved in housing policy and finance: It's essential to consider what borrowers are thinking and expecting, not just the usual financial indicators like interest rates, monthly payments, or how much debt they're taking on compared to the value of their home.

Understanding people's expectations is tricky–they're hard to measure and introduce a challenge known as adverse selection, where borrowers have more information about their ability to pay back loans than the lenders or investors do. Discovering that something not typically tracked by mortgage investors, like borrower expectations, can have a big impact on whether loans are paid as agreed is striking and warrants more attention.

For those regulating and monitoring the housing market, grasping the relationship between what people expect and what's actually happening can lead to better forecasts and smarter policymaking.

Christos A. Makridis, Ph.D., is an associate research professor at Arizona State University, the University of Nicosia, and the founder and CEO of Dainamic Banking.

William D. Larson, Ph.D., is a senior researcher in the U.S. Treasury’s Office of Financial Research, and a non-resident fellow at the George Washington University’s Center for Economic Research. This research was conducted while Larson was a senior economist at the Federal Housing Finance Agency (FHFA). The views presented here are those of the authors alone and not of the U.S. Treasury, FHFA, or the U.S. Government.

Christos Makridis

Why Solana will prevail despite Ethereum ETFs

This article was originally published on Cointelegraph (with Connor O’Shea).

The cryptocurrency world is abuzz with bullish sentiment thanks to Bitcoin spot ETFs. Investors have been quick to accept that Ether spot ETFs will follow in the months ahead.

Many investors have begun to speculate that many altcoins will also have ETFs, which has fueled price appreciation. But all the enthusiasm has led many to overlook an obvious contender to Ethereum — Solana, which has beaten many expectations and continues to boast a sophisticated tech team.

There has been no shortage of stories about Solana and its links to FTX founder Sam Bankman-Fried, with many predicting its demise. However, Solana has weathered the storm according to a handful of metrics. For instance, the number of active addresses on the network has nearly returned to its 2022 level, and the number of new addresses has continued to grow at almost the same rate as in 2022. In fact, the number of unique active wallets (UAW) is up from 2022.

And that’s not to mention the reality that active addresses can be manipulated. An alternative metric, namely capital efficiency (i.e., decentralized exchange volume per dollar of total value locked), suggests Solana has outpaced Ethereum in recent months.
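
As a rough illustration of how this metric works (the figures below are hypothetical, not actual market data), capital efficiency is simply the ratio of decentralized exchange volume to total value locked:

```python
def capital_efficiency(dex_volume_usd: float, tvl_usd: float) -> float:
    """DEX trading volume generated per dollar of total value locked (TVL).
    Higher values suggest locked capital is being used more intensively."""
    return dex_volume_usd / tvl_usd

# Hypothetical daily snapshots for two chains (illustrative only):
print(capital_efficiency(dex_volume_usd=1.5e9, tvl_usd=3.0e9))    # 0.50
print(capital_efficiency(dex_volume_usd=2.0e9, tvl_usd=30.0e9))   # ~0.07
```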

To be sure, Solana isn’t operating perfectly, but it has clearly defied the expectations of many who thought it might collapse following the FTX blow-up. A large part of its recovery after FTX, and its growth since then, has been driven by seemingly wise leadership, which in turn shapes its technological investments, strategy, and ultimately community engagement.

The blockchain technology landscape, particularly at the layer-1 (L1) level, currently falls short of the transformative financial future many envisioned. While the initial promise of blockchain offered a vision of faster, cheaper, more efficient, and censorship-resistant financial systems, the reality today presents significant challenges. 

The L1 landscape is characterized by fragmentation of liquidity across a range of layer-2 solutions (L2s) and an absence of scalability that hampers efficiency and user experience in the decentralized exchange space, coupled with concerns about the degree of centralization in the centralized exchange space. This fragmentation has led to a piecemeal ecosystem where the seamless integration and interoperability necessary for a truly revolutionary financial system remain elusive. As a result, the blockchain community finds itself at a crossroads, seeking solutions that can fulfill the early promises of this technology.

Efforts to scale blockchain technology today are diverse, with each project taking a unique approach to overcome limitations in speed, efficiency, and interoperability. The Ethereum blockchain, for example, is pursuing a multi-layer strategy, incorporating both layer-2 scaling solutions and sharding to increase transaction throughput without sacrificing security.

Meanwhile, projects like Cosmos and Polkadot are exploring a multi-chain architecture that allows for specialized blockchains to communicate and transact seamlessly. Solana, along with newer entrants like Sui and Aptos, proposes an alternative approach, focusing on high throughput and efficiency at the layer-1 level itself. Each of these approaches represents a different path towards achieving scalability, with their own set of trade-offs between decentralization, security, and performance. The variety of solutions underlines the complexity of the scalability challenge and the blockchain community's commitment to finding a way forward.

Solana stands out for its unique approach to addressing the core issues plaguing the blockchain ecosystem and its robust community support — evidenced by its resilience post-FTX and the success of its global hackathons, which underscore the platform's strong foundation. And significant UX improvements, notably with mobile integration through Saga phones and competitive platforms like Jupiter — which rivals Uniswap — make Solana highly accessible.

Solana has also demonstrated its ability to handle finance at scale, offering 400ms block times for finality compared to Ethereum’s longer durations, while initiatives like Firedancer and local fee markets further exemplify Solana’s technological edge. The platform’s emphasis on seamless transactions without the need for bridging or dealing with fractured liquidity, coupled with its application in real-world solutions like decentralized physical infrastructure (DePIN), positions Solana as a leader in the blockchain space.

That’s not to say that Solana is guaranteed to surpass Ethereum, or even Bitcoin, but it does mean that Solana is no longer an underdog. And perhaps before any altcoin gets a spot ETF, Solana will have one of its own that will bring greater competition to the blockchain space.

Christos Makridis

How much longer can indebted Americans keep buying crypto?

This article was originally published on Cointelegraph.

Despite many seemingly positive reports about retail spending or the unemployment rate in the United States, the nation continues to battle several structural challenges that have only grown more severe, with a historic $34 trillion in public debt and a record $1.13 trillion in consumer credit card debt. Alexander Hamilton famously remarked that the "national debt, if it is not excessive, will be to us a national blessing," but the scale of current debt raises questions about the sustainability of fiscal policies and their long-term economic impact.

Concerns about the public debt used to be more of a fringe topic that conservatives and libertarians argued about. However, recent remarks by leading figures in the banking sector underscore the gravity of the situation. JPMorgan Chase CEO Jamie Dimon's warning of a global market "rebellion," Bank of America CEO Brian Moynihan's call for decisive action, “The Black Swan” author Nassim Taleb's "death spiral" prognosis, and former House Speaker Paul Ryan's description of the debt crisis as "the most predictable crisis we’ve ever had" highlight the urgent need for a reassessment of the United States' fiscal trajectory.

The public's growing anxiety over government debt, with 57% of Americans surveyed by the Pew Research Center advocating for its reduction, reflects a shift in societal priorities towards fiscal responsibility. This concern gains further significance in light of its real-world implications, notably on housing affordability and the broader economic landscape. The precarious state of the housing market, exacerbated by rising interest rates, epitomizes the link between fiscal policy and individual economic prospects: as public debt grows, so too do interest rates.

The global standing of the U.S. dollar, serving as a "convenience yield," plays a pivotal role in the country's ability to manage its substantial debt without immediate negative consequences. However, a recent working paper released through the National Bureau of Economic Research finds that the loss of the dollar's status could amplify the debt burden by as much as 30%. This revelation underscores the imperative to critically evaluate the nation's fiscal direction.

The challenge for the nation — and many other developed countries — mirrors what is going on for many consumers. Americans have increasingly turned to their credit cards, without paying down the balance, to cover regular expenses. A new report released through the New York Federal Reserve, for instance, shows that total credit card debt increased by $50 billion (or 4.6%) from the previous quarter to $1.13 trillion, marking the highest level on record in Fed data dating back to 2003 and the ninth consecutive annual increase.

The New York Fed report also shows an uptick in borrowers who are struggling with credit card, student, and auto loan payments. For example, 3.1% of outstanding debt was in some stage of delinquency in December — up from the 3% recorded the previous quarter, although still down from the average 4.7% rate seen before the Covid-19 pandemic began.

"Credit card and auto loan transitions into delinquency are still rising above pre-pandemic levels," said Wilbert van der Klaauw, economic research advisor at the New York Fed. "This signals increased financial stress, especially among younger and lower-income households."

An important strategy for retail investors during periods of uncertainty is to diversify. But how you diversify matters. Investing in the S&P 500 is good, but if all your savings are locked up in the S&P 500 and it plummets, then you’re in trouble. Even if a plunge took place in the next year and the S&P 500 eventually rebounded, you would still have to weather the storm.

An additional strategy is to have some exposure to crypto. Many people focus on Bitcoin, Ethereum, and other digital currencies. But at least equally important — if not more so — for long-run value creation in the digital assets market is hash rate, which reflects the computational power devoted to securing a proof-of-work blockchain. Bitcoin, for instance, has seen a sustained increase in its hash rate alongside its price appreciation.
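
For readers unfamiliar with the metric, a standard back-of-the-envelope calculation (not from the article; the difficulty figure below is hypothetical) estimates Bitcoin’s network hash rate from its mining difficulty, since each unit of difficulty corresponds to roughly 2^32 expected hashes per ten-minute block:

```python
def estimated_hash_rate(difficulty: float, block_time_s: float = 600.0) -> float:
    """Estimate network hash rate (hashes/second) from mining difficulty,
    using the standard approximation: difficulty * 2**32 / block time."""
    return difficulty * 2**32 / block_time_s

# Hypothetical difficulty value, for illustration only:
print(f"{estimated_hash_rate(7e13):.2e} H/s")  # ~5.01e20 hashes per second
```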

The upcoming year is an important one with substantial macroeconomic risks for both the nation and the consumer. Although some economic reports have been positive, we need to pay attention to the fundamentals and whether the data reflects transitory versus permanent shocks. The challenge for policymakers is to craft fiscal policies that foster sustainable growth and productivity, steering clear of scenarios where short-term fiscal expediencies precipitate long-term economic liabilities. The current path, however, mirrors the predicament of a borrower trapped in a cycle of debt, with interest rates surpassing their monthly income.

Let’s help make 2024 a transformational year for the better!

Christos Makridis

Understanding patterns of cyber conflict coverage in media

This article was originally published on Binding Hook (with Lennart Maschmeyer and Max Smeets).

In February 2014, the cyber threat intelligence community was stirred by the discovery of ‘The Mask‘, a highly advanced hacking group thought to be backed by a national government. This group had been targeting a range of entities, including government agencies and energy companies. Kaspersky Lab, a Russian cyber security company, described their activity as the world’s most advanced APT (Advanced Persistent Threat) campaign. However, despite the sophistication of The Mask, which had been active since at least 2007, its media coverage was surprisingly limited, failing to make significant headlines.

Fast forward to 2018, when Kaspersky Lab reported on Olympic Destroyer, the cyber attack that disrupted the 2018 Olympics, paralyzing IT systems and causing widespread disruption. This incident garnered immediate and extensive media coverage, with over 2000 news stories published, showcasing a stark contrast in the media’s approach to reporting cyber operations.

These two cases highlight a critical and intriguing question: Why do some cyber operations receive extensive media attention while others do not? The answer matters because media reporting shapes how the public and policymakers perceive the cyber threat landscape.

Yet, there has been a surprising lack of analytical research addressing why some cyber operations attract more media attention than others. Until now, our understanding has largely been shaped by anecdotal evidence rather than systematic analysis.

Our recently published academic article in the Journal of Peace Research begins to tackle this question by introducing a comprehensive collection of cyber operations reports derived from commercial threat intelligence providers, which are often the primary sources for journalists. Using multivariate regression, we identify the characteristics that correlate with the extent of media reporting on cyber operations. 
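
The sketch below illustrates the flavor of such a regression on simulated data; the variable names are hypothetical, not the paper’s actual specification. The number of news stories per operation is regressed on observable characteristics like effect type, sophistication, target, and attribution:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "news_stories": rng.poisson(5, n),         # outcome: volume of media coverage
    "disruptive": rng.integers(0, 2, n),       # effect operation vs. espionage
    "zero_day": rng.integers(0, 2, n),         # sophistication indicator
    "target_military": rng.integers(0, 2, n),  # target-type indicator
    "adversary": rng.integers(0, 2, n),        # attributed to Russia/China/Iran/NK
})

model = smf.ols(
    "news_stories ~ disruptive + zero_day + target_military + adversary",
    data=df,
).fit()
print(model.summary())  # coefficients show which traits correlate with coverage
```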

Four tests

First, we explored the intensity of effects produced by cyber operations. Historically, violent and shocking news stories have garnered more attention, encapsulated in the adage, ‘if it bleeds, it leads.’ We hypothesized that the more intense and threatening the effects of a cyber operation, the greater the media coverage it would receive. Our findings revealed that disruptive and destructive cyber operations do generate more news stories than their espionage counterparts, although the difference is not statistically significant.

Next, we examined the type of target involved in cyber operations. Previous assumptions paralleled the media coverage of cyber operations with terrorism, where attacks on more politically or symbolically significant targets garner more attention. However, our research indicates a different pattern. We found that operations targeting the military or financial sectors actually generate less media coverage.

The third aspect we considered is the perceived sophistication of cyber operations. The media often gravitate toward stories that are easily understandable and remarkable. In this context, we expected cyber operations employing zero-day exploits, an easily observable indicator of sophistication, to receive more coverage. Our research supports this expectation, showing a significant increase in media stories for cyber operations that use these advanced techniques.

Lastly, we investigated the origin of the threat. Previous studies in communications have highlighted a media tendency toward bias against those outside the audience’s primary demographic, often leading to an exaggerated portrayal of non-white individuals in terrorism-related news.

Extending this insight to the realm of cyber threats, we anticipated a similar pattern, with adversarial threat groups being overrepresented in media narratives. This presumption aligns with past research, which observed that operations attributed to Russia, China, Iran, and North Korea tend to receive more attention.

However, our research does not find a significant correlation between media coverage and cyber operations attributed to key adversaries of Western powers, such as Russia, China, Iran, and North Korea.

Double bias

Our findings reveal a ‘double bias’ in media reporting on cyber operations. This bias originates from the reporting practices of commercial threat intelligence firms, further skewed by media outlets’ preference for stories that resonate with their audiences. This layered selectivity results in a narrow and potentially distorted portrayal of cyber threats, influencing academic discourse and policy-making.

There is a fascinating trend to watch regarding the double bias. Traditionally, mostly Western cyber threat intelligence firms have publicly disclosed details on APTs. Kaspersky Lab, based in Russia, stands out as an exception. The company has also published on various Western covert cyber operations that haven’t been widely reported elsewhere. However, lately, Chinese cybersecurity companies have begun to publicly attribute cyber threat actors as well. If this trend continues, it will be intriguing to observe how the media reacts to these reports and how much they are taken as credible compared to reports from Western intelligence companies. 

Christos Makridis

Unpacking the Myths of Employee Ownership

This article was originally published in Inc Magazine with Bill Fotsch.

Last year, Pete Stavros, a senior partner at KKR and the founder of Ownership Works, published an article in Fortune championing shared company ownership as the "missing path to the American dream." And for good reason--an increasing share of Americans believe the American dream has deteriorated, with only 19 percent reporting confidence that their children's generation will be better off than their own, according to a recent NBC poll. Stavros's proposal received support not just from labor advocates but also from the investment community, including major financial institutions.

We, too, find common ground with Stavros, particularly in improving business results and the lives of the employees that drive those results. But we diverge when it comes to the implementation of employee ownership. The difference, as they say, is in the details--arguably, ones that make or break the success of employee ownership.

Recognizing the many benefits of employee ownership, our perspective emphasizes that it does not automatically produce the desired outcomes on its own. To rise to the American dream level, ownership must be earned, not simply given. In other words, ownership must be realized by gains in productivity and value-added; it cannot sustainably be given out in perpetuity. 

That raises a chicken-or-the-egg question. Let's go back to Corey Rosen's 1987 Harvard Business Review research, which revealed that ESOP (employee stock ownership plan) companies with participation plans grew three to four times faster than those without. The key word here is "participation." Rosen, an otherwise staunch supporter of employee ownership, did not shy away from revealing this detail. For the ESOP to thrive, employees must be involved in the plan and earn the reward.

That was over three decades ago; has the narrative changed?

Take, for instance, the Harvard Business School Working Knowledge article discussing how KKR's ownership model dramatically changed worker behavior and company success. It's a compelling narrative, but it may tempt readers toward an overly utopian view of ESOPs. An employee will not necessarily behave like an owner simply because they are given equity, any more than a pre-med student will behave like a doctor if given an unearned degree. In contrast, ownership is the fruit of stewardship and investment.

Recent conversations with Gil Hantzsch, President of MSA, reminded us that giving employees ownership changes a company's form more readily than its function. MSA is a thriving ESOP company, yet when they attempted to pool resources and share best practices across their various branches, the ownership model did not automatically encourage collaboration, trust, or shared action. It wasn't until MSA introduced structured interactions--a series of 'flocking events', where subject-matter experts met in person to get to know one another, build trust and share insights--that their best-practices initiative gained traction. Here, the company equity had been in place for years, but was not wholly sufficient to affect behavior.  

If it's clear that a company would do better if its employees began to think and act like owners, and ownership at face-value does not transform employees, then what does? 

The genesis of a successful ESOP doesn't begin with the ESOP itself. It starts with cultivating a culture of ownership among employees, treating them as true partners in the mission to deliver value to customers and ensure sustainable profitability. Many ESOP successes lead back to this fundamental approach, at companies such as MSA Engineering, Trinity Products, Springfield Remanufacturing, and Dorian Drake. And there is no shortage of successful companies that never had employee ownership as part of their arsenal: Southwest Airlines had profit sharing long before it had any employee equity program.

When employees can see and understand the economics of the business, they learn how their day-to-day behaviors influence the bottom line. Then they can innovate and contribute. When employees are actively in conversation with the customer, they have insight into what drives the business's value. As confidence builds, employees develop an eye toward long-term strategy. It's algebra, then calculus. This scaffolding ensures employees are prepared for the responsibility of ownership and can make the most of it.

Our five years of research on this management approach, called Economic Engagement, contains 8 waves of 50-150 companies per wave and is published in Inc. ("A Key Strategy to Double Your Profitable Growth"). It includes fifteen questions aimed at understanding drivers behind employee behavior and company success, one of which is employee ownership. While employee ownership is part of the equation, the existing body of research does not single it out as the ultimate driver of performance or employee well-being. It's the combination that produces superior results:

  1. Customer engagement is the starting point since customers define value and thus the economics of any business. Ensure that all employees have a window on what customers value on an ongoing basis, since customers change over time.

  2. Economic understanding aligns all employees around a common definition of success for the company, one that evolves from customer engagement and the value being added.

  3. Economic transparency enables all employees to see how the company is doing and learn from successes and failures.  

  4. Economic compensation gives all employees a shared stake in the results, making them economic partners in the company. This ranges from wages to incentive compensation and long-term equity.

  5. Employee participation leads to lower turnover and better relationships between owners/managers and employees, by encouraging employees to actively participate in the business, often beyond their defined role.

At economically engaged companies, employees are immersed in the operational economics that power profitability--metrics like product shipments, monthly job margin dollars, and the acquisition of new customers. Employees learn to track and forecast these key numbers on a weekly basis. They're empowered to steer these numbers in a positive direction, while also reaping the rewards of enhanced performance. They're likely to forge long-lasting and fruitful careers, as well as source quality referrals. The environment elevates the participation of the employee to a level that transcends transactional ownership.

Employee ownership is good, but by itself it has limited impact on employee behavior. It's hard for employees to feel motivated by a potential benefit of an indeterminate amount, at some point in the distant future. It's hard enough to get employees to participate in 401(k) matching programs. Shared ownership is not a panacea; it's a tool. So, let's agree that we should improve business results and the lives of the employees who drive those results--by learning from each other, from employees, and from research.

Christos Makridis

We studied 235 stocks–and found that ESG metrics don’t just make a portfolio less profitable, but also less likely to achieve its stated ESG aims

This article was originally published in Fortune.

Institutions have become increasingly skeptical about ESG ratings–and rightly so. In our recent research, we show how the inclusion of ESG metrics in assembling a portfolio can lead to unintended consequences.

After gathering ESG data along with the subset of stocks that traded daily between 1998 and 2020 on the three major exchanges, we quantitatively studied the inclusion of ESG metrics in two ways. First, we considered trading strategies that rely only on returns, rather than a combination of returns and ESG scores. We found that non-ESG rules incorporating returns result in higher ESG scores compared with ESG-based rules.

Second, we considered trading strategies that prioritize the stocks with the highest overall ESG score, reflecting the increased attention that ESG has received in recent years. We found that this does not result in the most efficient portfolio in terms of risk-adjusted returns: while including ESG data leads to portfolios with higher returns, it comes at the cost of more volatility.

Our results may come as a surprise: Because of the noise inherent in ESG metrics, including them creates estimation risk and worsens the portfolio allocation. In fact, we find that the explicit targeting of ESG metrics leads to a portfolio allocation that is economically and environmentally worse than the market allocation. That is consistent with prior research that finds substantial disagreement among ESG ratings agencies due to their chosen ESG metrics, how they measure the metrics, and how they weight across the metrics in forming overall scores. Our results are also consistent with recent research that has shown how the inclusion of uncertainty associated with an ESG metric lowers financial returns.
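
A minimal, self-contained sketch of this kind of comparison (simulated data, not the authors’ actual methodology or sample) contrasts a returns-ranked portfolio with an ESG-ranked one on the Sharpe ratio, a standard measure of risk-adjusted return:

```python
import numpy as np

rng = np.random.default_rng(42)
n_stocks, n_days = 235, 1000
daily_returns = rng.normal(0.0004, 0.02, size=(n_days, n_stocks))  # simulated returns
esg_scores = rng.uniform(0, 100, n_stocks)                         # simulated ESG ratings

def annualized_sharpe(port_returns):
    """Annualized Sharpe ratio of a daily return series (risk-free rate ~ 0)."""
    return port_returns.mean() / port_returns.std() * np.sqrt(252)

# Rule 1: hold the 50 stocks with the best mean return over a formation window.
top_by_return = np.argsort(daily_returns[:500].mean(axis=0))[-50:]
# Rule 2: hold the 50 stocks with the highest ESG scores.
top_by_esg = np.argsort(esg_scores)[-50:]

# Evaluate both equal-weighted portfolios out of sample (days 500 onward):
print("returns-ranked:", annualized_sharpe(daily_returns[500:, top_by_return].mean(axis=1)))
print("ESG-ranked:    ", annualized_sharpe(daily_returns[500:, top_by_esg].mean(axis=1)))
```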

It’s as if you are trying to hit a moving target–you will not only miss the target but also create a mess in the process. Even though the desire to achieve broader impact through ESG is good, the devil is in the details: the measurement and choice of metrics are enormously important, and the absence of clarity and consensus around them will introduce significant noise into investors’ portfolio decisions.

To make further sense of these results and understand how the average American thinks about ESG matters, we surveyed a nationally representative sample of 1,500 people and asked them to rank 10 ESG topics. While we can only speak to the relative ranking of each topic, we find no statistical evidence that individuals believe companies should focus on other priorities besides maximizing shareholder value after accounting for their own ranking of ESG issues.

Furthermore, those who personally rank issues such as climate change among the greatest priorities also recognize that it is not necessarily within a company’s objectives to address them. If anything, respondents tend to rank company objectives around paying a living wage higher than their own personal rankings of it. In this sense, whereas a frequent justification for active ESG policies is that people believe companies should be doing more, our result suggests that this belief is just a reflection of people’s own preferences, which they superimpose onto the company.

We also conducted a simple randomized experiment to gauge the impact of information on attitudes toward ESG: in contrast to the control group, some respondents were shown information from a scientific study about the costs of renewable energy. We then asked about their support for renewable energy policies and found that exposure to these often overlooked costs lowered support. This divergence between personal and organizational ESG objectives, combined with the muddled ESG scoring landscape, reiterates the potential pitfalls of relying heavily on these scores for investment decisions.

An essential takeaway is the need for a balanced approach. While ESG metrics can provide valuable insights into a company’s broader societal impact, they should be seen as a supplement, not a replacement, to traditional financial metrics. Investors should be wary of overemphasizing ESG at the expense of established measures that have stood the test of time.

Christos A. Makridis, Ph.D., is the founder and CEO of Dainamic Banking and holds academic affiliations at Stanford University, among other institutions.

Majeed Simaan, Ph.D., is a professor of finance and financial engineering at the School of Business at Stevens Institute of Technology.

Christos Makridis

Researchers reveal the hidden peril of ‘labeling’ employees

This article was originally published on Fast Company.

In today’s hyper-competitive business landscape, the quest to quantify and categorize employee performance is more aggressive than ever. Consulting giants like McKinsey offer provocative frameworks that promise to neatly sort your workforce into boxes, “amplifying the impact of star performers” by identifying six distinct employee groups, or archetypes.

Such categorizations echo the controversial strategies of yesteryear, notably Jack Welch’s “Rank and Yank” policy. Remember how that worked out? Welch’s system had its moment in the sun, but it eventually fell from grace, proving to be a divisive and morale-crushing strategy.

Before you sign that consulting agreement and begin using their employee filtration tools, it’s worth pausing to consider the powerful psychological implications of labeling. We need to talk about the Pygmalion Effect—a concept that suggests these labels could be doing more harm than good.

The Pygmalion Effect refers to the concept that the labels we attach to people can influence their behavior in ways that confirm these labels. Imagine you label someone as a “disruptor.” Over time, not only will they start acting the part, but their managers and colleagues will treat them as such, reinforcing the behavior. In other words, the label becomes a self-fulfilling prophecy, locking individuals into roles that may not reflect their potential or future performance.

For instance, a sales manager who labels a team member as “low potential” might unconsciously offer fewer growth opportunities, affecting the employee’s performance and motivation to step up. Or consider how many talented employees might be pigeonholed into roles that don’t fully exploit their skills, simply because of a label slapped onto them during a performance review.

Here’s the kicker: Employees aren’t static entities. Their performance and engagement levels can change, often dramatically, in response to factors like work environment and personal circumstances. Management practices alone have been shown to affect productivity by around 20%. We’ve seen firsthand how an employee branded as a “value destroyer” turned into a key asset when engaged and motivated properly. To think that an employee’s worth can be permanently categorized is to misunderstand the dynamic nature of human capital.

We have seen how eschewing labels propels results for hundreds of consulting clients, including:

  • A U.K. manufacturer’s owner had labeled the head of their model shop a troublemaker, or “value destroyer” in McKinsey terms. Ignoring this, the owner solicited his input on how to improve the business. The model shop head generated profitable ideas, leading to increased earnings. He emerged a leader, or “thriving star” in McKinsey terms.

  • A Kentucky landscape company viewed its employees as hired hands, or “mildly disengaged” in McKinsey terms. Treating them like trusted partners, with a shared focus and an incentive to increase job margin per month, drastically improved productivity and profits, as well as innovation. One truck driver, running a snowplow, generated a new client on his own by plowing an unplowed parish parking lot, asking only that the pastor take a call from his company sales team. No one told him to do this. With focus and incentive, he transformed from disengaged to “reliable and committed.”

  • An urgent care business had come to assume they were stuck with debilitating turnover; “quitters,” McKinsey might suggest. But after examining exit interviews and addressing the common issues (particularly lack of management listening and acting on provider input), the “quitters” stopped quitting. Patient NPS scores soared, as did profits.

The real cost you pay when working with imprudent consultants isn’t their expensive fees; it’s the potential stifling of employee growth and innovation. When you label people, you’re not just putting them in boxes; you’re putting a ceiling on what they can achieve. And in today’s fast-paced business world, that’s a luxury no company can afford. It turns out employees can and do change over time, something you can either enhance or stifle.

To be sure, some employees are simply poor performers who aren’t right for the job, even after you work with them to explore a change in responsibilities.

Of course, one solution is to screen employees better. Some of our prior research, for example, has found that employees who demonstrate greater intellectual tenacity tend to perform much better than their counterparts, and that their advantage in the labor market has grown over time as work has become more complex. One way to think about this result is that persistence and curiosity in the workplace are quintessential characteristics not only for problem solving but also for interpersonal dynamics. But hindsight is always 20/20, and the wrong candidates may still pass through the screening.

Instead of spending resources on categorizing employees, why not invest in creating an environment that promotes positive behavior change? By focusing on behaviors rather than labels, companies become more growth-oriented and attentive to what can change.

This fosters a culture where employees are empowered to evolve and adapt, driving not just individual success but also organizational excellence. Our multiple waves of survey research—on what we broadly refer to as Economic Engagement—show that when companies partner with employees to serve their customers profitably, behavior changes, and that in turn leads to greater profitability.

Economic Engagement isn’t your run-of-the-mill, feel-good company culture. Instead, it’s a well-structured, results-driven management system underpinned by transparency, a deep understanding of economics, and active employee involvement.

Employees aren’t just taught how to read a balance sheet. They’re immersed in the operational economics that power profitability: metrics like product shipments, monthly job margin dollars, and new-customer acquisition. Employees learn to track and forecast these key numbers on a weekly basis, and they’re empowered to steer the numbers in a positive direction while reaping the rewards of enhanced performance.
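To make the idea concrete, here is a minimal sketch of such a weekly scorecard. The figures and the naive run-rate forecast are hypothetical, chosen purely for illustration; they are not drawn from any client engagement.

```python
# A minimal sketch of a weekly job-margin scorecard; all figures are
# hypothetical. "Job margin" here means job revenue minus direct job costs.
weekly_jobs = [
    {"revenue": 12_000, "direct_costs": 7_400},
    {"revenue": 9_500,  "direct_costs": 6_100},
    {"revenue": 15_200, "direct_costs": 9_800},
]

job_margin = sum(j["revenue"] - j["direct_costs"] for j in weekly_jobs)
monthly_forecast = job_margin * 52 / 12  # naive run-rate forecast from one week

print(f"Job margin this week: ${job_margin:,.0f}")               # $13,400
print(f"Forecast monthly job margin: ${monthly_forecast:,.0f}")  # $58,067
```

The point isn’t the arithmetic; it’s that every employee can see the number, forecast it, and move it.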

Employees at an economically engaged company are likely to forge a long-lasting and fruitful career there, and to become a source of quality referrals. The environment elevates employee participation to a level that transcends categorization and engenders true engagement.

Before you take any steps to classify your workforce, consider other avenues for understanding and unlocking their potential. Sometimes, the smartest decision is to sidestep the labels and focus on cultivating a culture that brings out the best in everyone.

Christos A. Makridis is the CEO and founder of Dainamic, a financial technology startup, and a research affiliate at Stanford University. He holds doctorates in economics and management science and engineering from Stanford University. Follow @hellodainamic.

Bill Fotsch is a business coach, investor, researcher, and writer. He holds an engineering degree and an MBA from Harvard Business School and is the founder of Economic Engagement.

Christos Makridis

The FDIC’s 2023 Risk Review shows the surprising resilience of community banks despite inflation and shifting interest rates

This article was originally published in Fortune.

The Federal Deposit Insurance Corporation (FDIC) recently released its Risk Review for 2023, detailing a substantial increase in unrealized losses ($617.8 billion in the last quarter of 2022 and $515.5 billion in the first quarter of 2023), driven in large part by rising medium- and long-term market interest rates. If banks need liquidity and are forced to sell investments at a loss, the depreciation of their portfolios could be a death blow that renders many financial institutions insolvent.

While several asset quality indicators, such as the delinquency rate and the noncurrent loan rate, remained favorable, the reality is that the banking sector is not in good shape, and deteriorating macroeconomic conditions have exacerbated its risk factors. Rising interest rates, coupled with inflation, have simultaneously strained bank balance sheets and consumer debt and spending. However, there is an important silver lining in the report: Community banks have fared much better than their larger counterparts and have helped sustain small business lending.

One important metric for gauging the health of a bank is its net interest margin (NIM): the interest income generated from credit products such as loans and mortgages, net of the interest paid to holders of savings accounts and certificates of deposit, expressed as a share of the bank’s earning assets. Although NIMs rose across the industry as a whole in 2022, the increase was concentrated among community banks, whose NIM reached 3.45%, up from roughly 3.25% in 2021. Since 2012, community banks’ NIMs have run roughly 0.5 percentage points above those of their non-community counterparts.
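For readers who want the mechanics, here is the arithmetic behind a NIM figure. The balance-sheet numbers below are hypothetical, chosen only so the result matches the 3.45% community-bank figure cited above.

```python
# Illustrative NIM computation; all dollar figures are hypothetical.
interest_income = 34.5e6      # interest earned on loans, mortgages, securities
interest_expense = 10.0e6     # interest paid on savings accounts and CDs
avg_earning_assets = 710.0e6  # average interest-earning assets for the period

nim = (interest_income - interest_expense) / avg_earning_assets
print(f"NIM = {nim:.2%}")  # -> NIM = 3.45%
```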

In addition, community banks have played an outsized role in supporting small business lending. Even though they hold only 14.7% of total industry loans, they accounted for 23.6% of total small business loans in 2022, about 1.6 times their overall share. Moreover, the increased lending did not come at the cost of higher risk: The commercial and industrial early-stage past-due rate and noncurrent rate for community banks actually declined at the onset of the pandemic and have remained low at around 0.5%, compared with roughly double that among non-community banks.

To better understand the health of the banking sector at a higher frequency, I launched a monthly, nationally representative banking survey of 1,500 respondents in June. Consistent with the results of the FDIC Risk Review, I found that individuals who borrow from smaller banks are much more confident in the safety of their deposits. For example, 34.5% of respondents who work with a small bank report that their bank is “rock solid” with “no concerns,” whereas only 26% of those with a medium-sized bank report similarly (and 33% among large banks). There is a growing recognition that small banks are better positioned to maintain the trust and loyalty of their borrowers because their interactions with customers are more frequent and their investments more prudent, particularly in their local communities.

One potential concern with these results is that differences in the perception of risk simply reflect differences in the type of borrower. However, all results are robust to controlling for a wide array of demographic factors (age, race, education, marital status, employment status) and for respondents’ overall perception of the banking sector (as distinct from their own bank). Furthermore, those who work with a small bank are less optimistic about interest rates and central bank policy overall, so if anything, these estimates are conservative. A sketch of this kind of robustness check appears below.
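To illustrate the kind of robustness check described above, here is a minimal sketch in Python. The file name and variable names are hypothetical stand-ins, not the survey’s actual fields or the specification used in the research.

```python
# Hypothetical sketch of a regression with demographic controls;
# column names and the data file are illustrative, not the real survey.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("banking_survey.csv")  # hypothetical survey extract

# Outcome: confidence in the safety of one's own deposits.
# Regressor of interest: indicator for banking with a small bank.
# Controls: demographics plus perception of the banking sector overall.
model = smf.ols(
    "deposit_confidence ~ small_bank + age + C(race) + C(education)"
    " + C(marital_status) + C(employment_status) + sector_perception",
    data=df,
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

print(model.summary())
```

If the coefficient on small_bank stays positive and significant once the controls enter, the borrower-composition story loses force.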

As 2023 comes to a close, it is worth remembering the vital role that small banks play in providing liquidity to the banking system, often with the least risk. Given their exposure to shifting macroeconomic conditions, larger and mid-sized banks will need to pay closer attention to the quality of their assets and the health of their balance sheets.

Christos A. Makridis is the founder and CEO of Dainamic, a financial technology startup that empowers banks with regulatory compliance and forecasting software, in addition to serving as a research affiliate at several leading universities. Christos holds dual doctorates in economics and management science & engineering from Stanford.
