
Safe AI solutions, smart cities and the Beatles' first new track since 1995

Regulation news

The United States, Britain and more than a dozen other countries unveiled the first international agreement on how to keep artificial intelligence safe from irresponsible players, with the goal of creating AI systems that are “secure by design.” The 18 countries agreed that AI needs to be developed and deployed in a way that keeps customers and the wider public safe from misuse. Although the agreement is non-binding and carries mostly general recommendations, such as monitoring AI systems for abuse, protecting data from tampering and vetting software suppliers, it is a significant step towards a sounder AI development landscape.

Meanwhile in the US, the White House issued a new executive order, “Safe, Secure, and Trustworthy Artificial Intelligence.” It is poised to usher in a new era of national AI regulation, focusing on safety and responsibility across the sector. Analysts note that the executive order addresses a broad spectrum of objectives for the AI ecosystem and builds upon previous directives related to AI, but point to its main weaknesses: a lack of accountability and specific timelines, alongside potentially over-reaching reporting requirements. The order also retains an outdated approach to regulation in which the government guides AI’s future on its own, an approach that may not work in an ever-changing world where developments come too fast.

Governments are not the only ones drafting AI regulations; trade associations are engaged in the process as well. The US National Retail Federation released its Principles for the Use of Artificial Intelligence in the Retail Sector. The principles fall into four high-level categories: Governance and Risk Management; Customer Engagement and Trust; Workforce Applications and Use; and Business Partner Accountability.

Smart cities

In Phoenix, US, AI is helping the city from underground. The city’s Water Services Department launched a six-month wastewater treatment pilot program with AI developer Kando, whose Kando Pulse platform provides wastewater intelligence. The system uses sensors to collect data from wastewater and translate it into insights that flag irregularities in the sewage. Those irregularities range from pH issues to high temperatures, caused mainly by illegal dumping at industrial sites. Pollutants such as fuels, solvents and oils can damage wastewater plants and pipes, forcing the city to spend more time, energy and money cleaning and treating the sewer system. The sensors can alert city officials almost immediately when issues arise, enabling a quicker response and helping trace the culprits of illegal dumping. The Kando Pulse devices thus help ensure that only clean wastewater enters the treatment plants.
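
Kando has not published the details of its detection logic, but a minimal rule-based sketch of how sensor readings might be screened for irregularities could look like the following (all thresholds and field names here are hypothetical, not Kando's):

```python
from dataclasses import dataclass

# Hypothetical thresholds; a real system would calibrate these per site
# and likely use statistical or learned baselines rather than fixed limits.
PH_RANGE = (6.0, 9.0)          # acceptable pH window
MAX_TEMP_C = 40.0              # flag unusually hot discharges
MAX_CONDUCTIVITY_US = 3000.0   # high conductivity can indicate solvents

@dataclass
class Reading:
    sensor_id: str
    ph: float
    temp_c: float
    conductivity_us: float

def check_reading(r: Reading) -> list[str]:
    """Return a list of irregularities found in a single sensor reading."""
    alerts = []
    if not PH_RANGE[0] <= r.ph <= PH_RANGE[1]:
        alerts.append(f"{r.sensor_id}: pH out of range ({r.ph})")
    if r.temp_c > MAX_TEMP_C:
        alerts.append(f"{r.sensor_id}: high temperature ({r.temp_c} C)")
    if r.conductivity_us > MAX_CONDUCTIVITY_US:
        alerts.append(f"{r.sensor_id}: high conductivity ({r.conductivity_us} uS/cm)")
    return alerts

# A reading like this would trigger two alerts:
print(check_reading(Reading("manhole-17", ph=4.2, temp_c=51.0, conductivity_us=900.0)))
```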

In UAE’s Abu Dhabi, Bayanat, a geospatial data products and services provider, uses artificial intelligence to support the emirate’s urban planning agenda. The company aims to transform data products into information and provide recurring data-as-a-service products to its clients in national infrastructure, oil and gas, and municipal infrastructure. With the expected launch of new satellites and continuing improvements in its sensor capabilities, Bayanat is collecting geospatial data with unprecedented accuracy and detail, which will support Abu Dhabi’s autonomous vehicle and smart city programmes.

Another side of cities is an aid by AI to homelessness. Homelessness is a growing issue in Canada and to tackle the problem, many organizations and government agencies are adopting new technology. One of the tools is artificial intelligence. In some cases, this technology helps municipalities better understand who is at risk of being chronically homeless. Another algorithm uses thousands of data entry points to predict which communities in Canada will see a larger homeless population and the factors leading up to how people became unhoused. As the pandemic was unfolding in 2021, the team of technology and AI experts deployed an algorithm that was able to predict where and by how much homelessness would increase post-COVID. Using data from 60 different municipalities across Canada, the researches gathered publicly available information on shelter usage, hidden homelessness estimates, inflation trends, unemployment numbers and other factors like housing and rental stock. By coming up with different scenarios, the AI was able to predict which cities would likely see an increase in homelessness under these compounding factors.
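
The study’s actual model is not described here; conceptually, a prediction of this kind can be framed as regression over municipal indicators. Below is a minimal sketch with synthetic data and invented feature names, not the researchers’ dataset or algorithm:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for the kinds of indicators the article mentions:
# shelter usage, inflation, unemployment, housing/rental stock.
n = 200
X = np.column_stack([
    rng.uniform(0.5, 1.0, n),   # shelter occupancy rate
    rng.uniform(1.0, 8.0, n),   # inflation, %
    rng.uniform(3.0, 12.0, n),  # unemployment, %
    rng.uniform(0.0, 5.0, n),   # rental vacancy rate, %
])
# Fabricated target purely for demonstration: projected change in shelter demand.
y = 10 * X[:, 0] + 2 * X[:, 1] + 1.5 * X[:, 2] - 3 * X[:, 3] + rng.normal(0, 1, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
print(f"R^2 on held-out municipalities: {model.score(X_test, y_test):.2f}")
```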

Industry cases

For more than two decades, Mitsubishi Electric has incorporated technology into all its product lines that enables greater electrification, automation and sustainability in the home, office or manufacturing plant, with data and analytics underpinning these goals. For example, Mitsubishi Electric Trane HVAC US’s all-electric, all-climate heat pumps offer low- to moderate-income households a path to qualify for Inflation Reduction Act of 2022 incentives, while contributing to a cleaner environment and significantly lowering utility bills through greater energy efficiency. Another instance is the company’s AnyMile Drone Management Platform, which orchestrates drone-based logistics, bringing shippers, drone manufacturers, drone operators and ancillary service providers onto the same platform to increase the efficiency of drone travel and traffic and to reduce the carbon footprint of drone-based operations.

Another industry case comes from international maritime shipping. Hafnia announced the implementation of the SteelCorr Digital Paint Report (DPR) application on 57 of its ships to enhance paint maintenance and reporting. Effective paint maintenance is crucial for preventing corrosion and minimizing downtime and material replacement. DPR simplifies and automates reporting processes, reducing crew obligations and added workload. Moreover, Hafnia employs DPR’s AI to monitor corrosion levels onboard, detecting and alerting staff to increased corrosion in specific areas of the ship. The adoption of the DPR app also aligns with Hafnia’s sustainability goals: the AI app aims to reduce paint wastage, enhance transparency in data monitoring, extend asset lifespans and prioritize the well-being of at-sea teams.

Schleswig-Holstein Netz AG (SH Netz) is a large utility provider in Germany. Every utility line in the electricity, gas, communications and heating sectors that has been laid in the SH Netz network is precisely documented in the company’s geographical information system (GIS). These records must be kept up to date, ideally on a daily basis: approximately 1,000 colleagues regularly access the GIS in their day-to-day work, for example when the laying of a new line is planned and carried out or a fault has to be rectified. The challenge was to digitize line diagrams quickly enough to keep the records current, so SH Netz recognized early on the need for a daily overview of the number of diagrams still to be captured for process control. SH Netz decided to replace its previous visualization tool with Microsoft Power BI. Although complicated at the beginning, the Power BI solution was converted to SQL views for data integration, providing simplified and secure access to database tables. Now SH Netz has a product that is reliable, up to date and provides added value for both employees and partners. The daily update is now displayed in less than 20 minutes, down from 2 to 4 hours, and data can be refreshed several times a day, giving users the latest information.
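
SH Netz’s implementation details are not public, but the core idea of exposing a simplified, read-only slice of a database through a SQL view can be illustrated with a small sqlite3 sketch (the schema and all names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Invented schema: one row per line diagram awaiting capture.
    CREATE TABLE diagrams (
        id INTEGER PRIMARY KEY,
        sector TEXT,            -- electricity, gas, communications, heating
        received DATE,
        captured INTEGER DEFAULT 0
    );
    INSERT INTO diagrams (sector, received, captured) VALUES
        ('electricity', '2023-11-01', 1),
        ('gas',         '2023-11-01', 0),
        ('electricity', '2023-11-02', 0);

    -- A view gives reporting tools a simplified, stable interface
    -- without exposing the underlying tables directly.
    CREATE VIEW open_diagrams_per_day AS
        SELECT received, sector, COUNT(*) AS open_count
        FROM diagrams
        WHERE captured = 0
        GROUP BY received, sector;
""")

# A BI tool would query the view, not the raw table:
for row in conn.execute("SELECT * FROM open_diagrams_per_day"):
    print(row)
```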

In the classrooms

A group of academics teaching media and communication at South African universities set out to understand how university students were using generative AI and AI-powered tools in their academic practices. They administered an online survey to undergraduate students at five South African universities. The results suggest that the moral panics around the use of generative AI are unwarranted: students are not hyper-focused on ChatGPT. In terms of devices, 41% of respondents indicated that they primarily used a laptop for their academic work, followed by a smartphone (29.8%); only 10.5% used a desktop computer and 6.6% a tablet.

Students tended to use a range of AI-powered tools other than ChatGPT, including translation and referencing tools. Regarding online writing assistants such as Quillbot, 46.5% of respondents indicated that they used such tools to improve their writing style for an assignment, and 80.5% said they had used Grammarly or similar tools to help them write in appropriate English. Fewer than half of respondents (37.3%) said that they had used ChatGPT to answer an essay question. Students acknowledged that AI-powered tools could lead to plagiarism or affect their learning, but indicated that these tools could help to: clarify academic concepts; formulate ideas; structure essays; improve academic writing; save time; check spelling and grammar; clarify assignment instructions; find information or academic sources; summarize academic texts; guide students for whom English is not a native language to improve their academic writing; study for a test; paraphrase better; avoid plagiarism; and reference better.

Vendor news

MicroStrategy, a BI vendor, has continued its steadfast commitment to Bitcoin by purchasing an additional 155 BTC at a total cost of $5.3 million. The company first entered the Bitcoin market in August 2020 with a significant initial investment of $250 million. Since then, it has consistently added to its holdings, building a treasury reserve of over 158,400 bitcoin worth more than $5.4 billion at the time of writing.

SAP has unveiled new tools to build AI into business applications across its software platform, including new development tools, database functionality, AI services, and enhancements to its Business Technology Platform, BTP. The new suite brings together a number of SAP’s existing design and run-time services when it becomes generally available in early 2024. It will be optimized for development in Java and JavaScript, although it’ll also interoperate with SAP’s proprietary ABAP cloud development model, and will use SAP’s Joule AI assistant as a coding copilot.

From the world of science

A study by scientists at Catalent Biologics and industrial artificial intelligence firm Quartic.ai suggests that machine-learning models could help drug firms remove viral contaminants from products more effectively, as long as they have access to high-quality training data. Anion exchange chromatography (AEX) is an effective tool for virus removal: the approach separates molecules based on charge, using a positively charged resin to attract negatively charged molecules present in the process stream. In the study, the researchers used data from viral clearance experiments performed by Catalent over the past 30 years to train a model to predict how various AEX protocols would perform. The 104 training data points included information from 30 recombinant protein products and two model viruses: the mouse minute virus and the xenotropic murine leukemia virus. To test the resulting ML model, the team compared generated virus log reduction values with data points from the training data set and found that for approximately 70% of the data points the predicted values closely aligned with the experimental values. Having detailed data was vital: firms interested in using ML in this way will need the same sort of high-quality information to achieve comparable results.
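
The article does not specify the model architecture; as a rough sketch of the general workflow (train a regressor on process parameters, then check what fraction of predicted log reduction values land close to the experimental ones), something like the following could be used, with synthetic data standing in for Catalent’s proprietary dataset and an assumed closeness tolerance:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic stand-ins for AEX process parameters (e.g., pH, conductivity,
# resin load); the real study used 104 points from 30 protein products.
n = 104
X = rng.uniform(0, 1, size=(n, 4))
y = 4 + 2 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.3, n)  # fake LRVs

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
model = GradientBoostingRegressor(random_state=1).fit(X_tr, y_tr)

# "Closely aligned" is operationalized here as within 0.5 log10 units;
# the study's own tolerance is not given in the article.
pred = model.predict(X_te)
within = np.mean(np.abs(pred - y_te) < 0.5)
print(f"Fraction of predictions within 0.5 LRV: {within:.0%}")
```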

From complicated bio-science to geography. With dozens of active volcanoes in the Alaska-Aleutian region, it can be extremely challenging to identify the source volcano of a distal ash deposit. This limits the utility of marine sediment cores, even though they record numerous ash layers, for evaluating the hazard associated with each volcano. A new study involving ML examines whether the major element or the trace element composition of ash fragments forms a more sensitive indicator, and then develops a model based on an ensemble of ML strategies that uses ash trace element measurements to match layers to source volcanoes. The ML model is trained on the relatively short Holocene record of proximal deposits whose source volcanoes are known. It is then applied to reveal, over a much longer timescale, which volcanoes most often produce large explosive eruptions that may constitute major regional hazards.
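
The study’s exact ensemble is not reproduced here; as a schematic of the approach, trace-element concentrations can serve as features for a voting ensemble of classifiers, as in this sketch with synthetic compositions and invented volcano names:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Synthetic trace-element concentrations (ppm) for three invented volcanoes;
# each volcano gets a slightly different compositional "fingerprint".
n_per = 50
centers = rng.uniform(10, 100, size=(3, 6))  # 3 volcanoes x 6 elements
X = np.vstack([c + rng.normal(0, 5, size=(n_per, 6)) for c in centers])
y = np.repeat(["Volcano A", "Volcano B", "Volcano C"], n_per)

# A simple ensemble: average the class probabilities of two different models.
ensemble = VotingClassifier([
    ("rf", RandomForestClassifier(random_state=2)),
    ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
], voting="soft")

scores = cross_val_score(ensemble, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.0%}")
```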

Also in November, hydrologists and computer network researchers from the University of Padova in Italy collaborated on a proof-of-concept ML model that can make hydrological predictions. Hydrology models already exist, but the traditional ones are mathematically complex and require too many input parameters to be feasible. Using ML, the researchers trained a model that could, from the first 30 minutes of a storm, predict occurrences of water runoff or flooding up to an hour before they might happen. They trained the model with input parameters like rainfall and atmospheric pressure obtained from sensors at weather stations. The output parameters, like soil absorption and runoff volume, were a combination of collected data and additional synthetic data generated using traditional theoretical models. Synthetic data was necessary because the kind of data needed to build dependable machine-learning models for hydrology is scarce. That scarcity is the result of current data-collection practices: hydrological data is collected by sensors at predetermined time intervals, usually every few hours or even days, and only a small proportion of the collected data is useful for modeling. Precipitation like rain or snow happens relatively infrequently, so sensors sampling on a fixed schedule may not record any data at all during a downpour, and when they do, they usually won’t capture enough data points to show a storm’s progression in much detail. The researchers suggest that more sensors and a variable rate of data collection may help solve the problem.
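
As a rough illustration of the setup, assuming simplified features and a fabricated labeling rule rather than the Padova team’s actual data, a model of this kind could be prototyped as follows:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)

# Synthetic storms: six 5-minute rainfall readings (the "first 30 minutes")
# plus atmospheric pressure; the label says whether runoff occurred later.
n = 300
rain = rng.gamma(2.0, 2.0, size=(n, 6))        # mm per 5-minute interval
pressure = rng.normal(1000, 10, size=(n, 1))   # hPa
X = np.hstack([rain, pressure])
# Fabricated rule for demo purposes: heavy early rain tends to cause runoff.
y = (rain.sum(axis=1) + rng.normal(0, 3, n) > 25).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=3)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"Held-out accuracy: {clf.score(X_te, y_te):.0%}")
```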

Another song, eh?

November marked the release of the first “new” song by the Beatles since 1995. “Now and Then” is available on streaming services. Paul McCartney and Ringo Starr turned to breakthrough technology and ML to piece together a finished track from an old lo-fi John Lennon recording. The Beatles first attempted to make something from Lennon’s “Now and Then” demo in the mid-‘90s, when they successfully completed “Free as a Bird” and “Real Love” by layering full-band arrangements on top of Lennon’s demos. The pivotal moment came earlier this decade, when director Peter Jackson was working on his comprehensive Get Back documentary for Disney Plus. His team developed a technology that allowed them to take practically any piece of music and “split all the different components into separate tracks based on machine learning.” That technology gave “Now and Then” a new life.
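
The tool Jackson’s team built is proprietary and not publicly available; open-source stem-separation libraries such as Spleeter demonstrate the same general idea of splitting a mixed recording into separate tracks:

```python
# Open-source illustration of ML-based stem separation; this is not the
# proprietary tool Peter Jackson's team built, just the same general idea.
# Requires: pip install spleeter
from spleeter.separator import Separator

# '2stems' splits a mix into vocals + accompaniment; 4- and 5-stem
# pretrained models also exist (vocals, drums, bass, piano, other).
separator = Separator("spleeter:2stems")

# Writes vocals.wav and accompaniment.wav under output/demo/
separator.separate_to_file("demo.mp3", "output/")
```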
