AI challenges and opportunities in SMEs

The recent Oxford Economics research conducted for SAP and published by CEPYME highlights a significant trend among SMEs worldwide: the progressive adoption of artificial intelligence (AI) as a key tool for growth. Currently, only a quarter of these companies use AI, but this share is expected to rise above 51% within the next 12 months, indicating a strong shift towards digitalization and automation in the sector.

The study highlights that growth remains a priority for SME leaders: more than a third of respondents rank attracting new customers, gaining market share and increasing revenue as their main goals for the next two years. In addition, innovation in products, services and business models is considered a crucial objective.

Despite these ambitions, small and medium-sized businesses face numerous challenges that can hinder their growth. The main obstacles include difficulty finding and retaining talent, macroeconomic headwinds, and the inability to scale operations and systems effectively. Adapting to new geographies and creating innovative business models are also significant barriers.

To address these challenges, SMEs are investing in enterprise resource planning (ERP) and customer relationship management (CRM) solutions. More than half are already using these technologies, and a third plan to adopt them in the coming year. In addition, 73% of these companies have implemented cloud solutions, benefiting from the agility, optimization and cost reduction offered by these technologies.

Innovation is essential for the growth of medium-sized companies. Data integration is considered crucial for creating innovative business models and improving knowledge generation. Most of the companies surveyed also emphasize the importance of continuous digital transformation, with 76% accelerating this process.

Although only a quarter of medium-sized companies currently use AI and machine learning, more than half plan to adopt them in the coming year. These technologies are expected to significantly improve the design and launch of new products and services, as well as the personalization and automation of marketing and sales. Inbenta, for example, is a Valencian company that develops AI-based chatbot solutions for retail customer service.

Investment in digital technologies has increased significantly in recent years, driven by the adoption of teleworking and advances in AI. In 2022, organizations in the major economies spent more than 2 trillion dollars on digital technologies, a figure growing at an annual rate of 6.4%. This trend is expected to continue, and spending could double by the mid-2030s.

Benefits and challenges of AI for SMEs

Integrating generative AI solutions offers numerous benefits to SMEs, including:

  1. Improved operational efficiency: Process automation allows companies to reduce costs and increase productivity.
  2. Predictive analysis: Advanced tools can predict market trends and behaviors, facilitating strategic planning (a simplified sketch follows this list).
  3. Personalization: AI's ability to analyze large volumes of data allows companies to offer personalized customer experiences, improving satisfaction and loyalty.
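
As a concrete but deliberately simplified illustration of the predictive-analysis point above, the sketch below fits a basic trend model to hypothetical monthly sales figures and projects the next quarter. The data, the single time-index feature and the use of scikit-learn are illustrative assumptions, not part of the study.

```python
# Minimal sketch: a simple sales forecast for an SME.
# The monthly figures below are hypothetical; a real deployment would use
# richer features (seasonality, promotions, ERP/CRM data) and validation.
import numpy as np
from sklearn.linear_model import LinearRegression

sales = np.array([120, 132, 128, 141, 150, 149, 162, 170, 168, 181, 190, 197])
months = np.arange(len(sales)).reshape(-1, 1)  # time index as the only feature

model = LinearRegression().fit(months, sales)  # fit a linear trend
next_quarter = np.arange(len(sales), len(sales) + 3).reshape(-1, 1)
forecast = model.predict(next_quarter)

print("Projected units for the next three months:", forecast.round(1))
```

In practice, an SME would feed such a model from its ERP or CRM data and validate it against held-out periods before relying on it for planning.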

However, adopting these technologies also presents significant challenges. SMEs must invest in adequate technological infrastructure, train their staff and ensure ethical data management. In addition, the rapid evolution of AI requires them to constantly update skills and knowledge to remain competitive.

New EU AI regulation: ensuring safe and ethical use

The European Union has made significant progress in regulating AI with the proposal of a new law whose main objective is to mitigate the risks associated with the use of this technology. The law addresses crucial aspects such as the health, safety and fundamental rights of European citizens, establishing a regulatory framework to ensure that AI systems are developed and used in a safe and ethical manner.

The regulation introduces a risk classification for AI systems, dividing them into four categories: unacceptable risk, high risk, limited risk and minimal risk. This classification allows for differentiated regulation, focusing regulatory efforts on the systems that pose the greatest dangers.

  1. Unacceptable Risk: This category includes AI systems that are considered a threat to security, fundamental rights, and privacy. An example is the use of real-time biometric identification systems in public spaces without the explicit consent of individuals. These systems are completely banned under the new law.
  2. High Risk: High-risk systems are those that can have a significant impact on people’s lives, such as those used in critical sectors (health, transportation, education, etc.). These systems must meet strict requirements before being marketed or used, including rigorous evaluations, certifications and the implementation of risk management measures.
  3. Limited Risk: For systems that present limited risk, the law requires specific transparency measures. For example, AI applications that interact directly with users must clearly disclose that they are using AI. This includes virtual assistants and chatbots, which must be recognizable as such by users (see the sketch after this list).
  4. Minimal Risk: Most current AI applications would fall into this category. These systems pose minimal risk and are therefore subject to fewer regulations. However, the law encourages good voluntary practices and self-regulation in this area.
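
As a small illustration of the limited-risk transparency obligation described above, the sketch below shows a customer-service bot that identifies itself as an AI system on first contact. The class, messages and placeholder reply logic are assumptions made for this example, not a reference implementation of the regulation.

```python
# Minimal sketch: a chatbot that discloses it is an AI system (hypothetical).
DISCLOSURE = (
    "Hello! I am an automated AI assistant. "
    "You can ask to be transferred to a human agent at any time."
)

class SupportBot:
    def __init__(self) -> None:
        self.disclosed = False

    def reply(self, user_message: str) -> str:
        # Disclose the use of AI on first contact, then answer normally.
        if not self.disclosed:
            self.disclosed = True
            return DISCLOSURE
        # Placeholder answer logic; a real bot would call an NLU or LLM backend.
        return f"Thanks for your message about '{user_message}'. Let me check that for you."

bot = SupportBot()
print(bot.reply("Where is my order?"))  # first turn: AI disclosure
print(bot.reply("Where is my order?"))  # later turns: normal answers
```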

Transparency is a cornerstone of the law. Developers and operators of AI systems must provide clear and understandable information about how their technologies work, including how automated decisions are made. In addition, documentation and traceability requirements are established to ensure that AI systems can be effectively audited.
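
To make the documentation and traceability requirement more tangible, the sketch below records each automated decision together with the inputs and rationale behind it, so it can be audited or explained later. The field names, the credit-scoring scenario and the JSON-lines storage are illustrative assumptions; the law defines obligations, not a file format.

```python
# Minimal sketch: an append-only log of automated decisions (hypothetical schema).
import json
from datetime import datetime, timezone

def log_decision(log_path: str, model_id: str, inputs: dict,
                 outcome: str, explanation: str) -> dict:
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,        # which system/version made the decision
        "inputs": inputs,            # data the decision was based on
        "outcome": outcome,          # the automated decision itself
        "explanation": explanation,  # human-readable rationale
        "human_review": False,       # set to True once a person re-examines it
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record

log_decision(
    "decisions.jsonl",
    model_id="credit-scoring-v2.3",
    inputs={"income": 32000, "tenure_months": 18},
    outcome="application deferred to manual review",
    explanation="score below automatic-approval threshold",
)
```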

The law also reinforces citizens' rights in relation to AI. Individuals will have the right to receive explanations of automated decisions that significantly affect them, and they may challenge those decisions. In addition, the right not to be subject to decisions based solely on automated processing, without meaningful human review, is guaranteed.

Failure to comply with the law can result in severe economic penalties, with fines of up to 6% of the offending company's annual global turnover. The law also has extraterritorial applicability, meaning that companies from outside the EU that wish to operate in the European market must comply with these regulations.

While the law imposes significant restrictions, it also seeks to promote innovation in the AI sector in an ethical and safe manner. It provides for regulatory sandboxes: controlled environments where companies can test new technologies under the supervision of the competent authorities.
