Posts tagged with "artificial intelligence"

Andrew Exner, a graduate research assistant in Purdue’s Motor Speech Lab, is working to help Parkinson’s patients during the COVID-19 pandemic as announced by 360 MAGAZINE.

AI Technology Helps Parkinson’s Patients During COVID-19

The COVID-19 pandemic is leading a Purdue University innovator to make changes as she works to provide new options for people with Parkinson’s disease.

Jessica Huber, a professor of Speech, Language, and Hearing Sciences and associate dean for research in Purdue’s College of Health and Human Sciences, leads Purdue’s Motor Speech Lab. Huber and her team are now doing virtual studies to evaluate speech disorders related to Parkinson’s using artificial intelligence technology platforms.

Huber and her team have been working to develop telepractice tools for the assessment and treatment of speech impairments caused by conditions like Parkinson’s disease. They received a National Institutes of Health Small Business Innovation Research grant to develop a telehealth platform that facilitates speech treatment with the SpeechVive device, which has received attention at the Annual Convention of the American Speech-Language-Hearing Association.

In the current study, Huber and her team are collaborating with a startup company, Modality AI, which developed the AI platform that will be used in the study.

“The application of the technology we are evaluating may lead to far-reaching insights into more standardization in assessments, earlier diagnoses and possibly an easier way to track discrete changes over time to guide interventions,” said Andrew Exner, a graduate research assistant in the Motor Speech Lab. “My personal research passion, and the mission of our lab, is to find ways to improve the quality of life for people with Parkinson’s and other related diseases.”

Exner is leading the virtual study for participants across the country to evaluate an AI platform that can collect and automatically measure the speech skills of people with Parkinson’s disease. The need for AI platforms is increasing as the use of telepractice explodes as a result of the COVID-19 pandemic.

“My interest in speech-language pathology actually started during my training as an actor and opera singer,” Exner said. “I saw the effects of pathology on the voice and wanted to extend that interest into speech disorders.”

SpeechVive Inc. is an Indiana startup company based on Huber’s research. The company has developed a wearable medical device to improve the speech clarity of people with Parkinson’s.

Anyone interested in learning more about the virtual studies, or in taking part, can email Exner at exner@purdue.edu.

Image courtesy of Purdue University.

Global AI Spending to surge by 120% and hit $110bn by 2024

By Jastra Kranjec

Recent years have witnessed a swell in the adoption of artificial intelligence solutions, revolutionizing industries and helping businesses boost growth. The rising volume and complexity of business data are set to continue driving AI adoption in the following years, causing a surge in global AI spending.

According to data presented by BuyShares.co.nz, global artificial intelligence spending is expected to surge by 120% and hit $110bn by 2024.

Global AI Spending Jumped 33% YoY, Despite COVID-19 Crisis

Businesses across the world use AI technology to be innovative and scalable. Automation, deep learning, and natural language processing can improve decision-making, efficiency, and speed, and help predict trends.

In 2015, companies and organizations worldwide spent $5bn on implementing AI systems in their business, revealed the IDC 2020 Worldwide Artificial Intelligence Systems Spending Guide. In the next three years, this figure jumped fivefold to $25bn. Statistics show that 2019 witnessed $37.5bn worth of investments into AI business solutions, a 650% jump in four years.

Increased investments in AI technology continued in 2020, with organizations expected to invest $50.1bn in AI systems, despite the COVID-19 crisis. The following years are set to witness remarkable growth in global AI spending, with the figure surging by almost 120% to $110bn by 2024.

Automated customer service, sales process automation, automated threat intelligence and prevention, and IT automation were the leading use cases for AI in 2020, accounting for nearly a third of total AI spending this year. However, the IDC data show that automated human resources, IT automation and pharmaceutical research and discovery are the fastest-growing use cases.

Life Sciences and Retail Lead in Adoption of AI

The IDC data indicate the retail industry and the banking sector are expected to spend the most on AI solutions in 2020. The retail companies primarily focused their AI investments on improving customer experience via chatbots and recommendation engines. Banks are expected to keep investing in AI-driven fraud prevention and program advisors. Discrete manufacturing, process manufacturing, and healthcare round out the top five industries for AI spending this year.

The life sciences sector, including biotech, pharma and biomedical companies, has the most significant share of organizations that have adopted AI, revealed Capgemini’s AI-Powered Enterprise survey.

Statistics show that 67% of organizations operating in this market adopted AI at scale, while another 33% launched AI pilots that are still undeployed in production. The retail industry ranked second, with 51% of companies utilizing artificial intelligence technology. The consumer products sector follows with a 44% share.

The Capgemini data show the automotive industry represents the fourth-leading sector, with 17% of companies successfully using AI in production. Another 49% of automotive companies have deployed a few use cases in production on a limited scale. The telecom industry follows, with a 14% and 57% share, respectively.

The full story can be read here: https://buyshares.co.nz/2020/10/20/global-artificial-intelligence-spending-to-surge-by-120-and-hit-110bn-by-2024/

AI brain chip illustration by UMD featured in 360 MAGAZINE.

UMD iSchool to Investigate AI Tech for Intelligence Analysts

University of Maryland College of Information Studies (UMD iSchool) researchers, led by principal investigator Dr. Susannah Paletz, have been awarded a three-year $616,700 grant funded by the Army Research Office (ARO); ARO Program Manager Dr. Edward Palazzolo oversees the project. This project examines how teams of intelligence analysts can work together and with artificial intelligence (AI). AI has the potential to support intelligence analysts in reviewing potentially hundreds of thousands of source documents, pulling out key findings, and assembling them into actionable intelligence. AI can also aid the flow of information and projects among members of the intelligence team, improving the efficiency and accuracy of their work.

“AI-driven technology has sometimes been touted as a replacement for human intelligence,” said Dr. Adam Porter, the project’s co-principal investigator, professor at the UMD Department of Computer Science, and Executive and Scientific Director of the Fraunhofer USA Center for Experimental Software Engineering (CESE). “In practice, however, AI doesn’t always work, or gives limited or biased answers. Human oversight is still required, and it’s therefore critical that we deeply understand how humans and AI can work best together.”

The Human-Agent Teaming on Intelligence Tasks project, coordinated through the iSchool, will focus on two research areas: (1) how interactive AI agents, such as chatbots, can mitigate or exacerbate the communication and coordination problems that occur with shift handovers of intelligence work, such as inaccuracy blindness and overlooking potentially relevant information; and (2) how humans can deal with these blind spots, biases, or inaccuracies.

The Experiment

The research team plans to develop an experimental infrastructure for testing team-cognition challenges in the work of intelligence analysts. The infrastructure will consist of task-relevant input materials, such as mission descriptions and source documents; activity-recording tools; experimental monitoring capabilities; and different AI supports for human analysts, such as chatbots offering advice on a particular task.

“We want to develop a task that can raise the problems with asynchronous team cognition in intelligence tasks, but is simple enough to be used by research participants with minimal training,” said Dr. Susannah B.F. Paletz, research professor at the UMD iSchool, and affiliate at the UMD Applied Research Laboratory for Intelligence and Security (ARLIS).

This task will substantially increase insight into the strengths and weaknesses of AI technology for supporting intelligence tasks. It will also shed light on how and when human analysts can safely place their trust in AI technology, and how they can proactively identify problems in AI-generated input. Finally, it will aid teams of humans, including asynchronous teams, working together in situations that involve AI-generated input.

“This basic research is an important step in the early process of learning how humans and agents can collaboratively become a single team with considerably greater capacity and productivity than human-only teams,” Palazzolo said. “Moreover, this research has broad implications for the work of many teams focused on knowledge work and information management, such as medical teams involved in shift work, collaborative software development teams, and research teams.”

Collaborators

In addition to Porter, the Fraunhofer USA team includes Dr. Madeline Diep, Senior Scientist, and Jeronimo Cox, Software Developer, at Fraunhofer USA CESE. The Fraunhofer USA team will lead the effort to create the configurable AI agents used in the experimental tasks, and will build a data collection and analysis infrastructure for capturing and understanding participant behaviors.

The UMD iSchool team includes graduate students in Information Science: Tammie Nelson, a fourth-year PhD student, and Melissa Carraway and Sarah Valhkamp, both incoming first-year PhD students.

The grant proposal team includes UMD Office of Research Administration Contract Manager Stephanie Swann, iSchool Business Manager Jacqueline Armstrong, and former iSchool Business Manager Lisa Geraghty.

Outside of UMD, Dr. Aimee Kane, the Harry W. Witt Faculty Fellow and an Associate Professor of Management in the Palumbo-Donahue School of Business at Duquesne University, will be a consultant and an intellectual contributor on this project.

ARO is an element of the U.S. Army Combat Capabilities Development Command’s (CCDC) Army Research Laboratory. The Human-Agent Teaming on Intelligence Tasks project (grant no. W911NF-20-1-0214) runs through June 30, 2023.

About the University of Maryland College of Information Studies

Founded in 1965 and located just outside of Washington, D.C., the University of Maryland College of Information Studies (UMD iSchool) is a top-ten-ranked research and teaching college in the field of information science. UMD iSchool faculty, staff, and students are expanding the frontiers of how people access and use information and technology in an evolving world – in government, education, business, social media, and more. The UMD iSchool is committed to using information and technology to empower individuals and communities, create opportunities, ensure equity and justice, and champion diversity.


Combating False COVID-19 Information

Fake news websites could be identified by the partners that provide their video streaming and advertising, research shows.

The new approach could help search engines and social media giants, such as Facebook and Twitter, to flag untrustworthy articles more rapidly and prevent their misleading content from going viral.

Fake news stories and conspiracy theories have proliferated online during the coronavirus pandemic, from alleged cures to claims the virus is caused by 5G technology.

Ram Gopal, Professor of Information Systems Management at Warwick Business School, said:

“The US Presidential Elections in 2016 highlighted the significant harm that fake news can do, potentially impacting election outcomes and undermining democratic institutions.

“These concerns have multiplied during the coronavirus pandemic and fake news has resulted in an untold number of deaths from misleading and harmful information.

“It is vital that we use all the tools at our disposal to combat the spread of fake news and the huge damage it does.”

Flagging fake news stories typically involves humans or Artificial Intelligence (AI) scanning individual articles for misleading information.

This is difficult as articles can be quickly changed to defeat the algorithms designed to catch them.

Now researchers at Warwick Business School have found fake news and clickbait websites tend to use the same supply chain partners to provide key components, such as advertising platforms.

While fake news websites can disguise their text and images to appear real, they cannot conceal which partners they use, as these can be easily identified using browsing tools.

Professor Gopal said: “Trying to identify fake news articles is a cat and mouse game, because the content can be quickly changed to defeat the algorithms searching for them.

“To detect fake news effectively we need strong markers that are difficult to hide or fudge.

“A website’s choice of third-party partners exposes the essence of what the website does and how it achieves that. A tiger cannot hide its stripes.”

The researchers compared more than 450 top news websites, as listed by alexa.com, with 50 fake news websites and 50 clickbait sites identified by the Harvard University Library.

They identified 115 significant third parties that were only used by trustworthy sites and seven that were only used by untrustworthy platforms. These markers helped identify untrustworthy websites more rapidly and efficiently, with an accuracy of 94%.
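As a hypothetical illustration of the marker idea (the partner domains below are invented placeholders, not the paper's actual lists), classifying a site reduces to set-membership tests on its observed third-party partners:

```python
# Sketch of marker-based site classification. TRUST_ONLY / FRAUD_ONLY stand in
# for the 115 trustworthy-only and 7 untrustworthy-only partners the paper
# identified; the names here are made up for illustration.
TRUST_ONLY = {"reuters-cdn.example", "ap-syndication.example"}
FRAUD_ONLY = {"shady-ads.example", "clickbait-player.example"}

def classify(partners: set) -> str:
    if partners & FRAUD_ONLY:          # any untrustworthy-only marker present
        return "untrustworthy"
    if partners & TRUST_ONLY:          # any trustworthy-only marker present
        return "trustworthy"
    return "unknown"                   # fall back to content analysis

print(classify({"shady-ads.example", "cdn.example"}))  # untrustworthy
```

Because a site's supply-chain partners are visible in its network requests, this check runs without reading the article text at all, which is what makes the markers hard to fudge.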

The findings were published in the paper, Real or not? Identifying untrustworthy news websites using third-party partnerships, in the journal ACM Transactions on Management Information Systems.

Professor Gopal said: “It is not a case of these markers replacing traditional content analysis. The two approaches can complement each other.

“Content can be scrutinised more closely, with a lower threshold for action, if it originates from a website that our markers classify as illegitimate.  Integrating the two approaches could result in more accurate and more robust detection mechanisms.”


ROYBI ROBOT – AI-powered EdTech

A growing number of states say their schools will stay closed for the rest of the 2019-20 academic year to stem the coronavirus outbreak. At Roybi Robot, a leader in AI-powered EdTech and personalized education, they know firsthand the importance of AI in connection with remote education and learning.

At ROYBI, they’re already noticing a big shift toward remote learning due to the recent circumstances and headlines. And throughout this all, one thing seems inevitable: school settings, as they stand today, will change. Online and remote learning will be systems that educational institutions adopt for future emergencies. They envision a future where the new culture of learning begins at home through devices with sophisticated AI technology such as Roybi Robot. Artificial intelligence allows educators to follow the child’s progress in a smarter way and provides a personalized approach to each child individually. Additionally, it enables closer collaboration between parents and educators, who can join forces around the child’s education.

With many uncertainties around the school closures, many educators have already started approaching distance and remote learning for the long term, but a lack of personalized attention and progress tracking has been a major challenge for them. The role of artificial intelligence becomes even more significant in a modern world, as it can monitor each child individually and provide feedback to educators more accurately than traditional approaches.

At Roybi, they are NOT saying to eliminate school and the classroom. They are saying that, to save time and cost, children can do more of their learning at home, guided by educators, with AI personalizing the educational experience for each child. They envision a future where they can connect learners, parents, educators, and even their Roybi Robots together while creating an engaging and interactive learning experience.


AIWAYS RE-STARTS PRODUCTION

AIWAYS, the Shanghai-based personal mobility provider, is to start taking online orders for its U5 all-electric SUV from European consumers from the end of April. Secured via a small deposit, the U5 will be offered exclusively via a direct-to-customer sales model, and not retailed or leased through traditional dealerships. European pre-sale markets and the required deposit amount will be announced by AIWAYS in April.

Alexander Klose, Executive VP Overseas Operation at AIWAYS, commented: “Online pre-sales represents the next important phase of AIWAYS’ entry into the European market. It’s our promise to customers that for only a small deposit they can be among the first to receive the U5 and start enjoying the benefits of a long range, high-tech and well equipped electric SUV.”

Meanwhile, AIWAYS has re-started production of the U5 at its manufacturing facility in Shangrao, China, following the interruption caused by COVID-19 (coronavirus). Production of the European U5 will start in July, with the first deliveries now slated for August 2020.

Making the most of its agility and flexibility as a startup, AIWAYS is adapting its pre-sale marketing activities to better suit the enforced period of ‘contactless’ retail because of COVID-19. By introducing new online platforms and seamless digital experiences, AIWAYS will give European car buyers the confidence to order the U5 online. More details to follow soon.

Rice University x SLIDE

Deep learning rethink overcomes major obstacle in AI industry

SLIDE is the first algorithm for training deep neural nets faster on CPUs than GPUs

Rice University computer scientists have overcome a major obstacle in the burgeoning artificial intelligence industry by showing it is possible to speed up deep learning technology without specialized acceleration hardware like graphics processing units (GPUs). Scientists from Rice, supported by collaborators from Intel, will present their results today at the Austin Convention Center as a part of the machine learning systems conference MLSys.

Many companies are investing heavily in GPUs and other specialized hardware to implement deep learning, a powerful form of artificial intelligence that’s behind digital assistants like Alexa and Siri, facial recognition, product recommendation systems and other technologies. For example, Nvidia, the maker of the industry’s gold-standard Tesla V100 Tensor Core GPUs, recently reported a 41% increase in its fourth quarter revenues compared with the previous year.

Rice researchers created a cost-saving alternative to GPUs: an algorithm called the “sub-linear deep learning engine” (SLIDE) that uses general-purpose central processing units (CPUs) without specialized acceleration hardware.

“Our tests show that SLIDE is the first smart algorithmic implementation of deep learning on CPU that can outperform GPU hardware acceleration on industry-scale recommendation datasets with large fully connected architectures,” said Anshumali Shrivastava, an assistant professor in Rice’s Brown School of Engineering who invented SLIDE with graduate students Beidi Chen and Tharun Medini.

SLIDE doesn’t need GPUs because it takes a fundamentally different approach to deep learning. The standard “back-propagation” training technique for deep neural networks requires matrix multiplication, an ideal workload for GPUs. With SLIDE, Shrivastava, Chen and Medini turned neural network training into a search problem that could instead be solved with hash tables, which radically reduces the computational overhead compared to back-propagation training. For example, a top-of-the-line GPU platform like the ones Amazon, Google and others offer for cloud-based deep learning services has eight Tesla V100s and costs about $100,000, Shrivastava said.

“We have one in the lab, and in our test case we took a workload that’s perfect for V100, one with more than 100 million parameters in large, fully connected networks that fit in GPU memory,” he said. “We trained it with the best (software) package out there, Google’s TensorFlow, and it took 3 1/2 hours to train.

“We then showed that our new algorithm can do the training in one hour, not on GPUs but on a 44-core Xeon-class CPU,” Shrivastava said.

Deep learning networks were inspired by biology, and their central feature, artificial neurons, are small pieces of computer code that can learn to perform a specific task. A deep learning network can contain millions or even billions of artificial neurons, and working together they can learn to make human-level, expert decisions simply by studying large amounts of data. For example, if a deep neural network is trained to identify objects in photos, it will employ different neurons to recognize a photo of a cat than it will to recognize a school bus.

“You don’t need to train all the neurons on every case,” Medini said. “We thought, ‘If we only want to pick the neurons that are relevant, then it’s a search problem.’ So, algorithmically, the idea was to use locality-sensitive hashing to get away from matrix multiplication.”

Hashing is a data-indexing method invented for internet search in the 1990s. It uses numerical methods to encode large amounts of information, like entire webpages or chapters of a book, as a string of digits called a hash. Hash tables are lists of hashes that can be searched very quickly.
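A minimal sketch of the idea, under assumed parameters (random-hyperplane hashing and toy sizes chosen for illustration; SLIDE itself is a C++ system, not this code): hash each neuron's weight vector into a table once, then hash an input and touch only the neurons that land in the same bucket, instead of multiplying the input against every weight vector.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_neurons, n_planes = 16, 1000, 8

# Random weight vectors standing in for one layer's neurons, plus random
# hyperplanes that define a locality-sensitive hash.
weights = rng.standard_normal((n_neurons, d))
planes = rng.standard_normal((n_planes, d))

def lsh_code(v):
    # Sign pattern of v against each hyperplane -> an integer bucket id.
    bits = (planes @ v > 0).astype(int)
    return int("".join(map(str, bits)), 2)

# Index every neuron into the hash table once, up front.
table = {}
for i, w in enumerate(weights):
    table.setdefault(lsh_code(w), []).append(i)

# At training time, hash the input and retrieve only the colliding neurons,
# skipping the full matrix multiplication over all 1000 weight vectors.
x = rng.standard_normal(d)
active = table.get(lsh_code(x), [])
print(f"activating {len(active)} of {n_neurons} neurons")
```

Vectors pointing in similar directions tend to fall on the same side of the random hyperplanes, so a bucket lookup approximates "which neurons would respond strongly to this input" at a fraction of the cost.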

“It would have made no sense to implement our algorithm on TensorFlow or PyTorch because the first thing they want to do is convert whatever you’re doing into a matrix multiplication problem,” Chen said. “That is precisely what we wanted to get away from. So we wrote our own C++ code from scratch.”

Shrivastava said SLIDE’s biggest advantage over back-propagation is that it is data parallel.

“By data parallel I mean that if I have two data instances that I want to train on, let’s say one is an image of a cat and the other of a bus, they will likely activate different neurons, and SLIDE can update, or train on, these two independently,” he said. “This is a much better utilization of parallelism for CPUs.

“The flipside, compared to GPU, is that we require a big memory,” he said. “There is a cache hierarchy in main memory, and if you’re not careful with it you can run into a problem called cache thrashing, where you get a lot of cache misses.” Shrivastava said his group’s first experiments with SLIDE produced significant cache thrashing, but their training times were still comparable to or faster than GPU training times. So he, Chen and Medini published the initial results on arXiv in March 2019 and uploaded their code to GitHub. A few weeks later, they were contacted by Intel.

“Our collaborators from Intel recognized the caching problem,” he said. “They told us they could work with us to make it train even faster, and they were right. Our results improved by about 50% with their help.” Shrivastava said SLIDE hasn’t yet come close to reaching its potential.

“We’ve just scratched the surface,” he said. “There’s a lot we can still do to optimize. We have not used vectorization, for example, or built-in accelerators in the CPU, like Intel Deep Learning Boost. There are a lot of other tricks we could still use to make this even faster.”

He said SLIDE is important because it shows there are other ways to implement deep learning.

“The whole message is, ‘Let’s not be bottlenecked by matrix multiplication and GPU memory,'” Shrivastava said. “Ours may be the first algorithmic approach to beat GPU, but I hope it’s not the last. The field needs new ideas, and that is a big part of what MLSys is about.”

Additional co-authors include James Farwell, Sameh Gobriel and Charlie Tai, all of Intel Labs.

The research was supported by the National Science Foundation (NSF-1652131, NSF-BIGDATA 1838177), the Air Force Office of Scientific Research (FA9550-18-1-0152), Amazon and the Office of Naval Research.

MLSys paper
Rice University News on Twitter

About Rice University

Located on a 300-acre forested campus in Houston, Rice University is consistently ranked among the nation’s top 20 universities by U.S. News & World Report. Rice has highly respected schools of Architecture, Business, Continuing Studies, Engineering, Humanities, Music, Natural Sciences and Social Sciences and is home to the Baker Institute for Public Policy. With 3,962 undergraduates and 3,027 graduate students, Rice’s undergraduate student-to-faculty ratio is just under 6-to-1. Its residential college system builds close-knit communities and lifelong friendships, just one reason why Rice is ranked No. 1 for lots of race/class interaction and No. 4 for quality of life by the Princeton Review. Rice is also rated as the best value among private universities by Kiplinger’s Personal Finance.

AI and Humans: Super Bowl Ads Explore Relationship

Voice Tech Zeitgeist: Super Bowl Ads Reveal Our Complex, Ever-Evolving Relationship with AI

By Eric Turkington, RAIN

Super Bowl spots are a barometer of what the world’s biggest brands think the American public wants to hear. And in 2020, perhaps more starkly than ever, Super Bowl ads telegraphed the complicated relationship we humans have with our AI counterparts.

Super Bowl advertisers often converge around common themes each year based on the prevailing sentiment, from embracing nostalgia to championing social purpose to retaining our humanity amid technological revolution. What was striking about the several commercials that featured voice AI in 2020 was how different they were, with each revealing a distinct belief, fear, or hope that we harbor about this technology as it becomes an ever more central part of our lives.

Here’s a breakdown of the wildly different takes I saw on the role of voice assistants at the dawn of a decade.

Amazon goes for humor to reinforce modern AI dependence. Amazon’s Alexa ad tapped celebrity star power to explore a hypothetical: real-life couple Ellen and Portia wonder what life was like before Alexa. Clearly no expense was spared to imagine humorous takes on this question across a range of faux historical settings, from court jesters to bottle-blowing musicians. The ad reinforces the notion of servility: Alexa is the agent serving the human master, while also overtly calling attention to the humanness of the voice assistant’s name (every vignette includes a person with a name that begins with “Al”). This ad touches on two controversial questions in voice AI. First, should we be teaching our children to treat voice assistants as fundamentally less than human, subject to our every request? Second, was it fair to people named Alexa to have their name co-opted by Amazon for a voice assistant positioned broadly in popular culture as a servant? Lauren Johnson, founder of Alexa, who is a human, certainly would have a thing or two to say here.

Google tugs at heartstrings by showing an emotional side of voice AI. Considered by many to be among the best of this year’s crop, Google’s “Loretta” tapped into the emotionally raw and relatable circumstance of dealing with a loved one’s death. A man uses Google Assistant–the name is never mentioned in the creative–to remember advice his wife gave him and to pull up memories of their time together. In contrast to Alexa’s portrayal, Google Assistant is playing the role of supportive companion and memorialist. This isn’t the subjugation of AI for menial tasks, but for an elevated purpose that augments the relationship we have with one another, whether living or dead.

Snickers raises that ole eavesdropping concern. Snickers used a generic voice assistant as one of many antagonists in a broader tableau of internet-gone-wrong. An older man sings “the surveillance state’s got a brand new trick,” to which a female voice assistant inside a speaker remarks, coldly, “I am not spying.” The moment was fleeting, but it’s nonetheless telling that the notion of spying smart speakers is as much a part of the dystopian tech narrative as selfie culture, sexting, and adult scooters.

Coca-Cola makes voice a tactical channel. Coca-Cola’s spot touting its new energy drink did not directly reference voice assistants, but Alexa has been among the biggest parts of the launch campaign for the same product. Before the ad ran on Super Bowl day, Coke launched a large-scale sampling campaign and leveraged Alexa as a channel for consumers: using the command “Alexa, order Coke Energy,” consumers could get a free sample of the new product, all of which reportedly sold out before the game. While the ad creative was devoid of calls-to-action on Alexa, Coke made savvy use of voice as a sampling strategy to build buzz for the product before its big Super Bowl debut. Perhaps if they had a few (million) more samples on hand, they would have included an Alexa call-to-action at the end of the spot.

Voice AI has become, and will increasingly be, an indelible part of our culture. As voice is able to do more, references to voice may well become less thematic and topical and more practical and functional. Indeed, the promoted utterance might be the most prominent hashtag of 2021.

Eric Turkington is the VP of strategic partnerships at RAIN, a firm specializing in voice strategy, design and development.

6 ways AI can help reduce business spend

There’s a lot that can go wrong in the typical organization’s spend audit process. Manually auditing vendor invoices and employee expense reports is time-consuming and frustrating. Most companies resign themselves to conducting partial audits, which might catch a few discrepancies but leave your company at risk for errors, waste, and fraud.

Luckily, there’s a solution: artificial intelligence. In our new ebook, Artificial Intelligence in Spend Auditing For Dummies, we cover how AI can improve your audit processes. Below are six ways AI can help reduce business spend.

1. Audit 100% of spend

At most organizations, the idea of humans manually reviewing every invoice and expense report is laughable. There are too many reports, too few people, and too many other responsibilities pulling at auditors’ time. Luckily, one of AI’s many superpowers is its ability to comb through documents and evaluate risk factors near-instantly. When an invoice comes in, AI systems can immediately check if its terms match those in the contract. Similarly, when an expense report is submitted, AI can look to see if it contains violations (e.g., duplicate receipts or out-of-policy spending); it’ll flag the reports with a problem for further investigation by your team and initiate an (immediate!) reimbursement for low-risk reports. Ultimately, a comprehensive audit process means a significant reduction in leakage, plus a faster process. 
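The screening flow described above can be sketched in a few lines; the rule names, limits, and categories here are illustrative assumptions, not the ebook's actual system:

```python
# Hypothetical 100%-audit sketch: check every expense line against policy,
# route violations to a human, and auto-approve clean reports immediately.
POLICY = {
    "meal_limit": 75.0,                          # assumed per-meal cap
    "banned_categories": {"entertainment-adult", "pets"},
}

def audit(items):
    def violates(item):
        return (item["category"] == "meal" and item["amount"] > POLICY["meal_limit"]) \
            or item["category"] in POLICY["banned_categories"]
    flags = [i for i in items if violates(i)]
    # No flags -> low risk, so reimbursement can be initiated right away.
    return ("needs review", flags) if flags else ("auto-reimburse", [])

items = [
    {"category": "meal", "amount": 40.0},        # within policy
    {"category": "pets", "amount": 120.0},       # e.g. a dog kennel
]
status, flags = audit(items)
print(status, len(flags))  # needs review 1
```

Because the checks are cheap, running them on every report rather than a sample is what closes the leakage a partial audit leaves open.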

2. Sniff out T&E misuse

In most companies, travel and entertainment (T&E) is the second largest controllable business expense after salaries and benefits. It’s also particularly hard to manage, given that there are so many small expenses continuously rolling in from many different sources. In our data, we’ve found that a whopping 10% of T&E expenses are either fraudulent or a mistake. We’ve heard of employees expensing everything from tattoos, to dog kennels, to strip clubs, to jewelry, and more. Other common violations include claiming personal trips as business-related, upgrading tickets to first class, expensing weekend meals with friends, and more. AI can help you track down these problems, ensure the incorrect expenses aren’t paid out, and give you the information you need to address any large-scale issues.

3. Double-check that invoices match the contract terms

Many organizations have procurement teams whose entire job is to negotiate favorable contract terms with vendors. But too often that effort is squandered once the contract is signed, as AP teams may not have the bandwidth to check that the invoice matches the agreed-upon terms. AI can do this automatically with every invoice received, instantly checking to make sure early payment, loyalty, and/or quantity discounts are applied.
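A minimal sketch of what such a contract-terms check might look like, assuming a simplified contract with quantity and early-payment discounts (all field names here are illustrative, not any particular product’s schema):

```python
def expected_invoice_total(contract, invoice):
    """Compute what the invoice should total once contract discounts apply."""
    total = invoice["quantity"] * contract["unit_price"]
    # Quantity discount kicks in at the negotiated minimum order size.
    if invoice["quantity"] >= contract.get("discount_min_qty", float("inf")):
        total *= 1 - contract["quantity_discount"]
    # Early-payment discount applies if paid within the agreed window.
    if invoice.get("paid_within_days", 999) <= contract.get("early_pay_days", 0):
        total *= 1 - contract["early_pay_discount"]
    return round(total, 2)

def matches_contract(contract, invoice):
    """Flag invoices whose billed total drifts from the contract terms."""
    return abs(invoice["billed_total"] - expected_invoice_total(contract, invoice)) < 0.01
```

An AI system would additionally extract these fields from the contract and invoice documents themselves; the comparison step is the easy part once the terms are structured.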

4. Don’t let fraud slide

Unfortunately, invoice and expense report fraud is common and can have a not-so-small impact on your company’s bottom line. Shell companies might bill for services that were never provided, or send fraudulent invoices that are part of a larger phishing scam. Employees might submit the same dinner receipt as a colleague, knowing that they’ll likely both be reimbursed, causing you to foot the bill for their dinner twice. With AI, you can check every invoice for risk factors and flag anything fishy for auditor review.

5. Catch double payments

Invoices often get held up — maybe an approver is out of office or the invoice failed a three-way match. In the meantime, the vendor follows up and someone else intervenes to pay the invoice out manually without noting it in the system. Afterward, the system clears the hold and the invoice is paid yet again. This double payment happens more than you might expect and often no one catches it (after all, who is going to complain about receiving extra money?). AI helps prevent this problem by keeping track of all spend and always checking for duplicates. 
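One simple way to catch the duplicates described above, sketched here with hypothetical field names, is to normalize each payment into a (vendor, invoice number, amount) key and flag any repeats — which catches the case where the manual payment and the system payment were recorded with slightly different formatting:

```python
def find_duplicate_payments(payments):
    """Flag payment pairs sharing a normalized (vendor, invoice_no, amount) key."""
    seen = {}
    dupes = []
    for p in payments:
        key = (
            p["vendor"].strip().lower(),     # "Acme " and "acme" should match
            p["invoice_no"].strip().upper(), # "inv-42" and "INV-42" should match
            round(p["amount"], 2),
        )
        if key in seen:
            dupes.append((seen[key], p))
        else:
            seen[key] = p
    return dupes
```

Real systems go further — fuzzy vendor-name matching, near-identical amounts, similar dates — but even this exact-key pass catches the common hold-then-repay scenario.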

6. Audit before you pay

Once a payment is out in the world, it can be difficult if not impossible to get it back — even if you later prove that the charge was erroneous or fraudulent. Even if you are able to recover it, doing so takes up valuable time and there’s a significant disadvantage to not having the cash on hand for your business. AI makes it possible to audit all spend before you pay, rendering this problem moot. 

Want to save money with AI? Download our new ebook, Artificial Intelligence in Spend Auditing For Dummies, to learn more about how artificial intelligence can help your team.

This article was originally published on the AppZen Blog

Josephine McCann is a Product Marketing Manager at AppZen, where she loves crafting content and telling interesting stories.


TOYOTA – WOVEN CITY

At CES, Toyota revealed plans to build a prototype “city” of the future on a 175-acre site at the base of Mt. Fuji in Japan.

Called the Woven City, it will be a fully connected ecosystem powered by hydrogen fuel cells.

Envisioned as a “living laboratory,” the Woven City will serve as a home to full-time residents and researchers who will be able to test and develop technologies such as autonomy, robotics, personal mobility, smart homes and artificial intelligence in a real-world environment.

“Building a complete city from the ground up, even on a small scale like this, is a unique opportunity to develop future technologies, including a digital operating system for the city’s infrastructure. With people, buildings and vehicles all connected and communicating with each other through data and sensors, we will be able to test connected AI technology… in both the virtual and the physical realms … maximizing its potential,” said Akio Toyoda, president, Toyota Motor Corporation.

Toyota will extend an open invitation to collaborate with other commercial and academic partners and invite interested scientists and researchers from around the world to come work on their own projects in this one-of-a-kind, real-world incubator.

“We welcome all those inspired to improve the way we live in the future, to take advantage of this unique research ecosystem and join us in our quest to create an ever-better way of life and mobility for all,” said Akio Toyoda, president, Toyota Motor Corporation.

For the design of Woven City, Toyota has commissioned Danish architect Bjarke Ingels, CEO of Bjarke Ingels Group (BIG). His team at BIG has designed many high-profile projects, from 2 World Trade Center in New York and Lego House in Denmark to Google’s Mountain View and London headquarters.

“A swarm of different technologies are beginning to radically change how we inhabit and navigate our cities. Connected, autonomous, emission-free and shared mobility solutions are bound to unleash a world of opportunities for new forms of urban life. With the breadth of technologies and industries that we have been able to access and collaborate with from the Toyota ecosystem of companies, we believe we have a unique opportunity to explore new forms of urbanity with the Woven City that could pave new paths for other cities to explore,” said Bjarke Ingels, Founder and Creative Director, BIG.

Design of the City

The masterplan of the city divides street usage into three types: streets for faster vehicles only; streets shared by lower-speed personal mobility and pedestrians; and a park-like promenade for pedestrians only. These three street types weave together to form an organic grid pattern to help accelerate the testing of autonomy.

The city is planned to be fully sustainable, with buildings made mostly of wood to minimize the carbon footprint, using traditional Japanese wood joinery combined with robotic production methods. The rooftops will be covered in photovoltaic panels to generate solar power in addition to power generated by hydrogen fuel cells. Toyota plans to weave in the outdoors throughout the city, with native vegetation and hydroponics.

Residences will be equipped with the latest in human support technologies, such as in-home robotics to assist with daily living. The homes will use sensor-based AI to check occupants’ health, take care of basic needs and enhance daily life, creating an opportunity to deploy connected technology with integrity and trust, securely and positively.

To move residents through the city, only fully-autonomous, zero-emission vehicles will be allowed on the main thoroughfares. In and throughout Woven City, autonomous Toyota e-Palettes will be used for transportation and deliveries, as well as for changeable mobile retail.

Both neighborhood parks and a large central park for recreation, as well as a central plaza for social gatherings, are designed to bring the community together. Toyota believes that encouraging human connection will be an equally important aspect of this experience.

Toyota plans to populate Woven City with Toyota Motor Corporation employees and their families, retired couples, retailers, visiting scientists, and industry partners. The plan is for 2000 people to start, adding more as the project evolves.

The groundbreaking for the site is planned for early 2021.

Interested in partnering with Toyota on the development of Woven City? Visit: Woven-city.global