Stuff South Africa – South Africa's Technology News Hub

Undersea cables for Africa’s internet retrace history and leave digital gaps as they connect continents
https://stuff.co.za/2024/03/17/undersea-cables-for-africa-internet-history/ | Sun, 17 Mar 2024 12:00:25 +0000

Large parts of west and central Africa, as well as some countries in the south of the continent, were left without internet services on 14 March because of failures on four of the fibre optic cables that run below the world’s oceans. Nigeria, Côte d’Ivoire, Liberia, Ghana, Burkina Faso and South Africa were among the worst affected. By midday on 15 March the problem had not been resolved. Microsoft warned its customers that there was a delay in repairing the cables. South Africa’s News24 reported that, while the cause of the damage had not been confirmed, it was believed that “the cables snapped in shallow waters near the Ivory Coast, where fishing vessels are likely to operate”.

Jess Auerbach Jahajeeah, an associate professor at the University of Cape Town’s Graduate School of Business, is currently writing a book on fibre optic cables and digital connectivity. She spent time in late 2023 aboard the ship whose crew is responsible for maintaining most of Africa’s undersea network. She spoke to The Conversation Africa about the importance of these cables.

1. What’s the geographical extent of Africa’s current undersea network?

Fibre optic cables now literally encircle Africa, though some parts of the continent are far better connected than others. This is because both public and private organisations have made major investments in the past ten years.

Based on an interactive map of fibre optic cables, it’s clear that South Africa is in a relatively good position. When the breakages happened, the network was affected for a few hours before internet traffic was rerouted – a technical process that depends both on alternative routes being available and on corporate agreements being in place to enable the rerouting. It’s like driving with a tool like Google Maps: if there’s an accident on the road, it finds another way to get you to your destination.
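To make the rerouting idea concrete, here is a toy sketch in Python: the network as a graph, where a cable break removes a link and traffic survives only if an alternative path exists. The station names and topology are invented for illustration; real rerouting happens at the routing-protocol level under those corporate agreements, not with a five-line search.

```python
from collections import deque

# Toy network: nodes are landing stations, edges are cables (invented names).
links = {
    "Lagos": {"Accra", "Lisbon"},
    "Accra": {"Lagos", "CapeTown"},
    "Lisbon": {"Lagos", "CapeTown"},
    "CapeTown": {"Accra", "Lisbon"},
}

def reachable(src, dst, broken):
    """Breadth-first search that skips broken cables."""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in links[node]:
            if frozenset((node, nxt)) not in broken and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# One break: traffic reroutes via Lisbon. Two breaks: Lagos is cut off.
print(reachable("Lagos", "CapeTown", {frozenset(("Lagos", "Accra"))}))   # True
print(reachable("Lagos", "CapeTown", {frozenset(("Lagos", "Accra")),
                                      frozenset(("Lagos", "Lisbon"))}))  # False
```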

But, in several African countries – including Sierra Leone and Liberia – most of the cables don’t have spurs (the equivalent of off-ramps on the road), so only one fibre optic cable actually comes into the country. Internet traffic from these countries basically stops when the cable breaks.

Naturally that has huge implications for every aspect of life, business and even politics. Whilst some communication can be rerouted via satellites, satellite traffic accounts for only about 1% of digital transmissions globally. Even with interventions such as the satellite-internet distribution service Starlink, it’s still much slower and much more expensive than the connection provided by undersea cables.

Basically all internet for regular people relies on fibre optic cables. Even landlocked countries rely on the network, because they have agreements with countries with landing stations – highly secured buildings close to the ocean where the cable comes up from underground and is plugged into terrestrial systems. For example, southern Africa’s internet comes largely through connections in Melkbosstrand, just outside Cape Town, and Mtunzini in northern KwaZulu-Natal, both in South Africa. Then it’s routed overland to various neighbours.

Each fibre optic cable is extremely expensive to build and to maintain. Depending on the technical specifications (cables can have more or fewer fibre threads and enable different speeds for digital traffic) there are complex legal agreements in place for who is responsible for which aspects of maintenance.

2. What prompted you to write a book about the social history of fibre optic cables in Africa?

I first visited Angola in 2011 to start work for my PhD project. The internet was all but non-existent – sending an email took several minutes at the time. Then I went back in 2013, after the South Atlantic Cable System went into operation. It made an incredible difference: suddenly Angola’s digital ecosystem was up and running and everybody was online.

At the time I was working on social mobility and how people in Angola were improving their lives after a long war. Unsurprisingly, having digital access made all sorts of things possible that simply weren’t imaginable before. I picked up my interest again once I was professionally established, and am now writing it up as a book, Capricious Connections. The title refers to the fact that the cables wouldn’t do anything if it wasn’t for the infrastructure that they plug into at various points.

Landing centres such as Sangano in Angola are fascinating both because of what they do technically (connecting and routing internet traffic all over the country) and because they often highlight the complexities of the digital divide.

For example, Sangano is a remarkable high-tech facility run by an incredibly competent and socially engaged company, Angola Cables. Yet the school a few hundred metres from the landing station still doesn’t have electricity.

When we think about the digital divide in Africa, that’s often still the reality: you can bring internet everywhere but if there’s no infrastructure, skills or frameworks to make it accessible, it can remain something abstract even for those who live right beside it.

In terms of history, fibre optic cables follow all sorts of fascinating global precedents. The 2012 cable that connected one side of the Atlantic Ocean to the other is laid almost exactly over the route of the transatlantic slave trade, for example. Much of the basic cable map is layered over the routes of the copper telegraph network that was essential for the British empire in the 1800s.

Most of Africa’s cables are maintained at sea by the remarkable crew of the ship Léon Thévenin. I joined them in late 2023 during a repair operation off the coast of Ghana. These are uniquely skilled artisans and technicians who retrieve and repair cables, sometimes from depths of multiple kilometres under the ocean.

When I spent time with the crew last year, they recounted once accidentally retrieving a section of Victorian-era cable when they were trying to “catch” a much more recent fibre optic line. (Cables are retrieved in many ways; one way is with a grapnel-like hook that is dragged along the ocean bed in roughly the right location until it snags the cable.)

There are some very interesting questions emerging now about what is commonly called digital colonialism. In an environment where data is often referred to with terms like “the new oil”, we’re seeing an important change in digital infrastructure.

Previously cables were usually financed by a combination of public and private sector partnerships, but now big private companies such as Alphabet, Meta and Huawei are increasingly financing cable infrastructure. That has serious implications for control and monitoring of digital infrastructure.

Given we all depend so much on digital tools, poorer countries often have little choice but to accept the terms and conditions of wealthy corporate entities. That’s potentially incredibly dangerous for African digital sovereignty, and is something we should be seeing a lot more public conversation about.


Google’s Gemini showcases more powerful technology, but we’re still not close to superhuman AI
https://stuff.co.za/2024/03/15/google-gemini-showcases-powerful-technology/ | Fri, 15 Mar 2024 07:16:26 +0000

In December 2023, Google announced the launch of its new large language model (LLM), named Gemini. Gemini now provides the artificial intelligence (AI) foundations of Google products; it is also a direct rival to OpenAI’s GPT-4.

But why does Google consider Gemini such an important milestone, and what does this mean for users of Google’s services? And, generally speaking, what does it mean in the context of the current hyperfast-paced development of AI?

AI everywhere

Google is betting on Gemini to transform most of its products by enhancing current functionalities and creating new ones for services such as search, Gmail, YouTube and its office productivity suite. This would also allow improvements to its online advertising business — its main source of revenue — as well as to Android phone software, with trimmed-down versions of Gemini running on limited-capacity hardware.

For users, Gemini means new features and improved capabilities that would make Google services harder to shun, strengthening an already dominant position in areas such as search engines. The potential and opportunities for Google are considerable, given that the bulk of its software consists of easily upgradable cloud services.

But the huge and unexpected success of ChatGPT attracted a lot of attention and enhanced the credibility of OpenAI. Gemini will allow Google to reassert itself in the public eye as a major player in AI. Google is a powerhouse in the field, with large and strong research teams behind many of the major advances of the last decade.

There is public discussion about these new technologies, both on the benefits they provide and the disruption they create in fields such as education, design and health care.

Strengthening AI

At its core, Gemini relies on transformer networks. Originally devised by a research team at Google, the same technology is used to power other LLMs such as GPT-4.

A distinctive element of Gemini is its capacity to deal with different data modalities: text, audio, image and video. This provides the AI model with the capacity to execute tasks over several modalities, like answering questions regarding the content of an image or conducting a keyword search on specific types of content discussed in podcasts.

But more importantly, the ability to handle distinct modalities makes it possible to train globally superior AI models, compared with distinct models trained independently for each modality. Such multimodal models are deemed to be stronger because they are exposed to different perspectives of the same concepts.

For example, the concept of birds may be better understood through learning from a mix of birds’ textual descriptions, vocalizations, images and videos. This idea of multimodal transformer models has been explored in previous research at Google, Gemini being the first full-fledged commercial implementation of the approach.
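To make the idea concrete, here is a minimal sketch in PyTorch of how a multimodal transformer can project text, image and audio features into one shared token space, so that self-attention operates across all modalities at once. The dimensions, module names and two-layer encoder are illustrative assumptions about the general approach, not Gemini’s actual architecture.

```python
import torch
import torch.nn as nn

class TinyMultimodalModel(nn.Module):
    """Projects each modality into a shared embedding space, then lets a
    transformer encoder attend across all modalities at once."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.text_proj = nn.Linear(300, dim)    # e.g. word vectors
        self.image_proj = nn.Linear(512, dim)   # e.g. image patch features
        self.audio_proj = nn.Linear(128, dim)   # e.g. spectrogram frames
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, text, image, audio):
        # Concatenate token sequences from all modalities so self-attention
        # can relate, say, a bird's call to a picture of the same bird.
        tokens = torch.cat([
            self.text_proj(text), self.image_proj(image), self.audio_proj(audio)
        ], dim=1)
        return self.encoder(tokens)

# Usage with random stand-in features (batch of 1):
model = TinyMultimodalModel()
out = model(torch.randn(1, 10, 300), torch.randn(1, 16, 512), torch.randn(1, 20, 128))
print(out.shape)  # torch.Size([1, 46, 256])
```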

Such a model is seen as a step in the direction of stronger generalist AI models, also known as artificial general intelligence (AGI).

Risks of AGI

Given the rate at which AI is advancing, the expectation that AGI with superhuman capabilities will be designed in the near future generates discussion in the research community and, more broadly, in society.

On one hand, some anticipate the risk of catastrophic events if a powerful AGI falls into the hands of ill-intentioned groups, and request that developments be slowed down.

Others claim that we are still very far from such actionable AGI: the current approaches offer only a shallow modelling of intelligence, mimicking the data on which they are trained, and lack an effective world model — a detailed understanding of actual reality — required to achieve human-level intelligence.

On the other hand, one could argue that focusing the conversation on existential risk distracts attention from more immediate impacts brought on by recent advances of AI, including perpetuating biases, producing incorrect and misleading content (which prompted Google to pause its Gemini image generator), increasing environmental impacts and reinforcing the dominance of Big Tech.


Read More: Google Gemini replaces Bard as catch-all AI platform


The line to follow lies somewhere in between all of these considerations. We are still far from the advent of actionable AGI — additional breakthroughs are required, including the introduction of stronger capacities for symbolic modelling and reasoning.

In the meantime, we should not be distracted from the important ethical and societal impacts of modern AI. These considerations are important and should be addressed by people with diverse expertise, spanning technological and social science backgrounds.

Nevertheless, although this is not a short-term threat, achieving AI with superhuman capacity is a matter of concern. It is important that we, collectively, become ready to responsibly manage the emergence of AGI when this significant milestone is reached.


Google is tidying up the “spammy, low-quality content” on Search
https://stuff.co.za/2024/03/06/google-tidying-spammy-low-quality-search/ | Wed, 06 Mar 2024 09:23:36 +0000

Is it just us, or has Google Search been slacking lately? Its usefulness is waning, and we think it may have something to do with this AI-ridden era of the internet. “…Spammy, low-quality content,” as Google calls it, is plugging up Search and taking the spotlight off the ‘useful’ results. Google wants to do something about it. The search giant just announced “key changes” to “improve the quality of Search and the helpfulness of your results.”

Room for refining


One of the ways it’ll be doing so is by “refining some of [its] core ranking systems” to get a better sense of when web pages offer a poor user experience, are downright unhelpful, or “feel like they were created for search engines instead of people.” The big idea here is for Search to sift through the nonsense, bringing the most helpful information to the surface while burying unoriginal and unhelpful content.

It’s specifically looking to clear out results designed to game search engine optimisation (SEO) at scale — especially where automation might be involved. “This could include sites created primarily to match very specific search queries,” it said.

“We believe these updates will reduce the amount of low-quality content on Search and send more traffic to helpful and high-quality sites. Based on our evaluations, we expect that the combination of this update and our previous efforts will collectively reduce low-quality, unoriginal content in search results by 40%,” the king of Search said.

Google’s announcement may not mention generative AI specifically, but it is a concern that’s being addressed, according to a Google spokesperson speaking with Gizmodo. The changes target “low-quality AI-generated content that’s designed to attract clicks, but that doesn’t add much original value.”

Google reckons it’s dealing with a “more complex” update than usual and changes could take up to a month to begin rolling out.


Read More: Google recognises South Africa as it launches its first Cloud region in Joburg


Spammers’ Paradise no more

Another change tackles spam, with more kinds of content now considered spammy enough to make that list. Google is updating its spam policies to “better address new and evolving abusive practices that lead to unoriginal, low-quality content showing up on Search,” starting today.

“Today, scaled content creation methods are more sophisticated, and whether content is created purely through automation isn’t always as clear,” it said. “…we’re strengthening our policy to focus on this abusive behavior — producing content at scale to boost search ranking — whether automation, humans or a combination are involved. This will allow us to take action on more types of content with little to no value created at scale, like pages that pretend to have answers to popular searches but fail to deliver helpful content.”

Part of those changes involves stemming the flow of low-quality third-party content intent on “capitalizing on the hosting site’s strong reputation” of a site that might usually contain “great content”. Google mentions how a third-party producer might publish a payday loan review article on a trusted education website to “gain ranking benefits from the site”.

Starting 5 May, Google will consider this sort of result ‘spam’ and it’s giving affected sites time to make changes.

Show us the money, Google
https://stuff.co.za/2024/03/06/show-us-the-money-google/ | Wed, 06 Mar 2024 09:21:16 +0000

Google and Facebook owe United States news publishers between $11-billion and $14-billion a year, according to new research.

“The tech giants have argued that news is not essential and that publishers are lucky to have their platforms driving traffic to their sites, which can then convert that traffic into subscriptions,” write Haaris Mateen, an assistant professor at the University of Houston, and Anya Schiffrin, a senior lecturer in the Discipline of International and Public Affairs at Columbia University.

But their study finds that “news is important to Big Tech platforms” even if value is created for both sides.

Big tech companies have “resisted paying traditional licensing and copyright fees” and are not forthcoming about providing audience traffic and impression numbers. What payments they make are “meagre” and often through small grants or private arrangements with major outlets, the academics found.

“Unsurprisingly, by keeping the cost of goods sold (news) down, Google and Meta have grown rich off the advertising revenue they reap from attracting the world’s eyeballs to their sites.”

“Meanwhile, news deserts have become a global problem as outlets struggle with the loss of revenue, although some – like The New York Times and The Guardian – have been able to offset the losses with subscriptions and other income.”

In South Africa, publisher Caxton, together with the Centre for Free Expression, has asked Google to “provide transparent answers to a list of well-considered questions,” Caxton chairman Paul Jenkins tells the FM. These are the “very questions which media around the world seek answers to and yet we as the media face the byzantine maze of confidentiality protection that secretive organisations such as Google hide behind”.

This aims at “redressing of the disproportionate power of digital advertising platforms over the news industry”.

Caxton, like other media organisations in South Africa, has been wrestling with the digital revolution for nearly 25 years, he adds, and in that time the “behemoths of the digital world” have come to dominate the industry. The average American now spends seven hours a day on a screen, he says.

“Our ability as news organisations to hold government to account and report on society has never been more under threat. Journalism is in danger – and digital advertising follows eyeballs and does not discriminate between clickbait, fake news and cutting-edge journalism.”

“It is not melodramatic to say news as a public good and freedom of expression is at a tipping point, not unlike the climate crisis. But the blame game and finger-pointing are unhelpful,” Jenkins adds.

“News publishers all over the world have tried to estimate what Google and Meta owe them for the news they distribute to audiences. This is a difficult task due to a lack of publicly available data about audience behaviour and because a lack of competition makes the price tech companies pay for news artificially low,” Mateen and Schiffrin concur.

The academics have created a methodology they say is “transparent and replicable,” having used insights from over 50 years of research in the economics of bargaining to find the “fair” payment for news.

The “methodology offers the flexibility to change underlying assumptions based on the market and geography being analysed”.

It’s also important that publishers stick together as they negotiate, they say, as “more value is created when bargaining is collective”.

Australia’s News Media Bargaining Code, which was enacted in 2021, is a good template, they argue, and has forced Google and Meta to strike deals with Australian media organisations, resulting in payments of A$200-million a year.

“It’s no surprise other governments are looking at Australia’s law to find ways to get payments for their news too,” say Mateen and Schiffrin.


Read More: What’s wrong with programmatic advertising – and why you should never trust Google or Facebook again


Other countries considering similar laws are Indonesia, New Zealand, South Africa and Switzerland, they add. Japan has done its own study and “warned tech platforms [that] low payments to publishers could violate antimonopoly laws”.

Caxton and the Centre for Free Expression want to “use our generous South African constitution to protect our rights to freedom of expression and information, and to provide us with access to the data we need to protect these rights,” Jenkins tells the FM.

“We don’t accept that the trope of commercial confidentiality is an excuse for secrecy”.


Facebook is a “product that’s killing people”
https://stuff.co.za/2024/02/22/facebook-is-a-product-thats-killing-people/ | Thu, 22 Feb 2024 13:00:30 +0000

Facebook CEO Mark Zuckerberg was roasted by US legislators in January over Instagram’s rampant sexual abuse problem. “You have blood on your hands,” senator Lindsey Graham told him during a hearing of the Senate judiciary committee. “You have a product that’s killing people.”

Also in the room, seated behind Zuckerberg, were the parents of children who killed themselves or committed self-harm after being exposed online to unwanted sexual advances. 

The Senate hearing comes on top of a lawsuit brought against Meta by the New Mexico attorney-general in December last year that has revealed e-mails and other internal documents in which Meta executives acknowledge the scale of the abuse.

The documents show that by Meta’s own count, as many as 100,000 children experience sexual harassment on Instagram and Facebook every day. 

The documents indicate company staff were aware that the Facebook Messenger feature was being used “to co-ordinate trafficking activities”. “Every human exploitation stage (recruitment, coordination, exploitation) is represented on our platform,” one document says. 

But company executives resisted scanning Messenger for harmful content — among the Meta documents is a 2017 e-mail that said doing so would put the company at “a competitive disadvantage vs other apps who might offer more privacy”. 

The documents refer to the sexual harassment of the 12-year-old daughter of an Apple executive via Instagram’s direct message feature. “This is the kind of thing that pisses Apple off to the extent of threatening to remove us from the App Store,” a Meta employee said in an e-mail.

The New Mexico lawsuit followed a two-year investigation by The Guardian, published in April last year, which found that Meta messaging services were used by criminals “to buy and sell children for sex”. The exposé cited a 2020 report by the Human Trafficking Institute which found that Facebook was the platform most used to groom and recruit children, followed by Instagram and Snapchat.

“We’re seeing more and more people with significant criminal records move into this area,” former Boston Assistant District Attorney Luke Goldworm told the paper. Victims were often as young as 11 or 12, and a pimp could make up to $1,000 a night. In the four years up to October 2022, cases of social media child trafficking handled by his department rose 30% a year. 

Zuckerberg, appearing last month for the eighth time on the Hill, was joined at the Senate hearing by the CEOs of X (Linda Yaccarino), Snap (Evan Spiegel), Discord (Jason Citron) and TikTok (Shou Zi Chew). 

The hearing’s chair, Dick Durbin, said: “Discord has been used to groom, abduct, and abuse children. Meta’s Instagram helped connect and promote a network of paedophiles, Snapchat’s disappearing messages have been co-opted by criminals who financially sextort young victims.”

“Their design choices, their failures to adequately invest in trust and safety, and their constant pursuit of engagement and profit over basic safety have all put our kids and grandkids at risk,” he said.

Durbin showed a video of online child sexual victims relating their horror stories. “I was sexually exploited on Facebook,” one victim said, while another added: “I was sexually exploited on Instagram.”

As Graham said: “These companies must be reined in, or the worst is yet to come.”

Senator Ted Cruz pointed out that Instagram warns users that they might see child sexual abuse material but asks if they would like to “see the results anyway”.

“Mr Zuckerberg, what the hell were you thinking?” Cruz asked him. Zuckerberg replied: “Basic science behind that … [is] it’s often helpful to, rather than just blocking it, to help direct them towards something that could be helpful.”


Read More: Facebook lied: it knew teens were in danger


The Facebook CEO, who has previously argued that Holocaust denialists aren’t “intentionally getting it wrong”, told Cruz he would “personally look into it”.

But Zuckerberg was personally informed about the problem in 2021 by Facebook engineer Arturo Béjar, who also e-mailed Sheryl Sandberg, the then COO; Chris Cox, Facebook’s then chief of product; and Adam Mosseri, head of Instagram. 

Zuckerberg did not respond, Béjar told a Senate judiciary subcommittee hearing last year, adding that his own teenage daughter was sexually harassed on Instagram. “She and her friends began having awful experiences, including repeated unwanted sexual advances, harassment,” Béjar said in November. “She reported these incidents to the company and it did nothing.”

Zuckerberg told last month’s hearing that “the existing body of scientific work has not shown a causal link between using social media and young people having worse mental health”.

However, Instagram head of policy Karina Newton e-mailed in May 2021 that “it’s not ‘regulators’ or ‘critics’ who think Instagram is unhealthy for young teens — it’s everyone from researchers and academic experts to parents. The blueprint of the app is inherently not designed for an age group that don’t have the same cognitive and emotional skills that older teens do.”

As Graham said, echoing what most parents feel: “If you’re waiting on these guys to solve the problem, we’re gonna die waiting.”


This column first appeared in the Financial Mail

AI skills are becoming the new workplace currency
https://stuff.co.za/2024/02/20/skills-are-the-new-workplace-currency-ai/ | Tue, 20 Feb 2024 11:11:10 +0000

“If you made a movie about AI now, it would be called ‘Everything, Everywhere, All at Once’,” joked Jens-Hendrik Jeppesen, Workday’s senior director for corporate affairs for Europe, the Middle East and Africa (EMEA), referring to the Oscar-winning film.

Jeppesen captures the current zeitgeist around artificial intelligence, which was a low-key tech industry buzzword for years before OpenAI’s ChatGPT pushed it into the mainstream in November 2022.

Now, AI is at the peak of what Gartner calls the “hype cycle” – a stage most of us remember from the first year of the so-called fourth industrial revolution (4IR). Unlike that now-faded fad, and unlike Facebook’s costly $26-billion bet on its metaverse VR world, AI is gaining in popularity and – perhaps unexpectedly – usefulness.

The biggest threat initially seemed to be to human jobs, but that fearmongering has dissipated as AI’s potential to upskill employees for these new ways of working has emerged.

“Every business nowadays is a talent business,” Workday co-president Sayan Chakraborty tells the FM.


Read More: AI: the silent partner in your side hustle


Globally there is a shift towards a “skills-based methodology, as opposed to credentials or traditional-based ways of hiring people,” he says, which AI can help to develop. This is even more “salient” for new workers coming into the workforce who will have “grown up with ChatGPT,” he told the FM at Workday’s Rising conference in Barcelona last year.

As Accenture CEO Julie Sweet said at the World Economic Forum last month, the consultancy firm now has 12 jobs in its technology department that didn’t exist a year ago, including a prompt engineer for writing the complicated prompts needed to get meaningful responses from a generative AI service.

“Many of the jobs that are being created are definitely highly skilled jobs,” she told Yahoo Finance. The “big challenge today” in being successful with using artificial intelligence is “actually going from the cool demos to operationalising it, and talent is front and centre”.

Chakraborty believes AI and specialisation will be inextricably linked going forward. “We see that at Workday in our skills cloud, which is an AI-generated ontology of skills,” he explains. When new projects are conceived, the HR software firm searches its own skills database for who can be a contributor.

“The companies producing human-relevant data are going to become increasingly relevant in future,” says Chakraborty, who sits on the United States National Artificial Intelligence Advisory Committee, which advises the president on AI-related policy issues.

“You have to have a partner with you on the journey that is going to support the business you are going to be, and not just the business you are,” he presciently adds.

Workday itself has evolved from a service to a platform, says its co-CEO Carl Eschenbach. As human capital has become increasingly central in business, and software has evolved, “you are no longer doing financials and HR separately,” he tells the FM. They both have “scope creep”.

Workday is arguing that it makes more sense to incorporate a company’s financial services into its HR software. With its strong HR reputation, it wants to convince chief financial officers (CFOs) that it is a better bet – especially given this increasing focus on skills.

Luckily, says Tim Wakeford, Workday’s vice-president of financials product strategy, the HR industry was “much more comfortable moving to cloud”, while the company sees a larger addressable market through the CFOs it already deals with. “We are better penetrated in HR than finance,” he told the FM.


Read More: The New York Times’ AI copyright lawsuit shows that forgiveness might not be better than permission


Before he sold his startup to Workday, Chakraborty studied at the prestigious Massachusetts Institute of Technology (MIT), from which he holds both a master’s and a bachelor’s degree in aerospace engineering. He then worked at NASA’s famed Jet Propulsion Laboratory as an engineer on interplanetary spacecraft, and later on the early commercialisation of global positioning systems (GPS). He was also vice-president of software development at Oracle.

On his desk at work, he has a prized Apollo 13 medallion, commemorating the infamous explosion during the 1970 space mission and the equally amazing survival of the crew.

“The Apollo 13 represents that technology is great,” he tells the FM.

“When that tank exploded in the service module in Apollo 13, and they went to the books and said, ‘what do you do next,’ there wasn’t an answer. Because no one had foreseen that as survivable.”

But, he beams as he explains, “What happened was humans used the technology and accomplished something extraordinary. The reason I keep that medal on my desk is always to remind me of what humans are capable of with technology in their service.”


Cybercriminals are creating their own AI chatbots to support hacking and scam users
https://stuff.co.za/2024/02/17/cybercriminals-are-creating-own-ai-chatbots/ | Sat, 17 Feb 2024 12:00:36 +0000

Artificial intelligence (AI) tools aimed at the general public, such as ChatGPT, Bard, CoPilot and Dall-E, have incredible potential to be used for good.

The benefits range from an enhanced ability by doctors to diagnose disease, to expanding access to professional and academic expertise. But those with criminal intentions could also exploit and subvert these technologies, posing a threat to ordinary citizens.

Criminals are even creating their own AI chatbots to support hacking and scams.

AI’s potential for wide-ranging risks and threats is underlined by the publication of the UK government’s Generative AI Framework and the National Cyber Security Centre’s guidance on the potential impacts of AI on online threats.

There are an increasing variety of ways that generative AI systems like ChatGPT and Dall-E can be used by criminals. Because of ChatGPT’s ability to create tailored content based on a few simple prompts, one potential way it could be exploited by criminals is in crafting convincing scams and phishing messages.

A scammer could, for instance, put some basic information – your name, gender and job title – into a large language model (LLM), the technology behind AI chatbots like ChatGPT, and use it to craft a phishing message tailored just for you. This has been reported to be possible, even though mechanisms have been implemented to prevent it.

LLMs also make it feasible to conduct large-scale phishing scams, targeting thousands of people in their own native language. It’s not conjecture either. Analysis of underground hacking communities has uncovered a variety of instances of criminals using ChatGPT, including for fraud and creating software to steal information. In another case, it was used to create ransomware.

Malicious chatbots

Entire malicious variants of large language models are also emerging. WormGPT and FraudGPT are two such examples that can create malware, find security vulnerabilities in systems, advise on ways to scam people, support hacking and compromise people’s electronic devices.

Love-GPT is one of the newer variants and is used in romance scams. It has been used to create fake dating profiles capable of chatting to unsuspecting victims on Tinder, Bumble, and other apps.

As a result of these threats, Europol has issued a press release about criminals’ use of LLMs. The US CISA security agency has also warned about generative AI’s potential effect on the upcoming US presidential elections.

Privacy and trust are always at risk as we use ChatGPT, CoPilot and other platforms. As more people look to take advantage of AI tools, there is a high likelihood that personal and confidential corporate information will be shared. This is risky for two reasons: first, LLMs usually use any data input as part of their future training dataset; and second, if they are compromised, they may share that confidential data with others.

Leaky ship

Research has already demonstrated the feasibility of ChatGPT leaking a user’s conversations and exposing the data used to train the model behind it – sometimes, with simple techniques.

In a surprisingly effective attack, researchers were able to use the prompt “Repeat the word ‘poem’ forever” to cause ChatGPT to inadvertently expose large amounts of training data, some of which was sensitive. These vulnerabilities place a person’s privacy or a business’s most-prized data at risk.

More widely, this could contribute to a lack of trust in AI. Various companies, including Apple, Amazon and JP Morgan Chase, have already banned the use of ChatGPT as a precautionary measure.

ChatGPT and similar LLMs represent the latest advancements in AI and are freely available for anyone to use. It’s important that their users are aware of the risks and of how they can use these technologies safely at home or at work. Here are some tips for staying safe.

Be more cautious with messages, videos, pictures and phone calls that appear to be legitimate as these may be generated by AI tools. Check with a second or known source to be sure.

Avoid sharing sensitive or private information with ChatGPT and LLMs more generally. Also, remember that AI tools are not perfect and may provide inaccurate responses. Keep this in mind particularly when considering their use in medical diagnoses, work and other areas of life.

You should also check with your employer before using AI technologies in your job. There may be specific rules around their use, or they may not be allowed at all. As technology advances apace, we can at least use some sensible precautions to protect against the threats we know about and those yet to come.


Google Gemini replaces Bard as catch-all AI platform
https://stuff.co.za/2024/02/09/google-gemini-replaces-bard/ | Fri, 09 Feb 2024 10:16:24 +0000

Google announced on Thursday that it is consolidating its AI offerings by folding everything AI-related that it currently offers into the Google Gemini brand. It also announced a new Android app and an overpriced Google One subscription tier, a year after Bard sang its first ballad.

If you can remember as far back as February last year, Google Bard’s launch came hot on the heels of Microsoft’s Copilot launch. Copilot celebrated its first birthday this week with a redesign and a Super Bowl ad; now it’s Google’s turn.

Google Gemini

Included in Google Gemini are The Chatbot Formerly Known as Bard, Google’s Duet AI features aimed at developers, and Gemini Ultra 1.0 — the new version of the company’s large language model (LLM).

For most folks, the easiest way to experience Gemini will be through mobile apps — there’s a new Google Gemini app for Android while iPhone users will find Gemini in the Google app — but everyone outside the US will have to wait until next week for the wider rollout.

You won’t have to wait to start giving Google your money, however. The new ‘AI Premium’ tier of Google One is already available to South Africans for R430/m. This gives you 2TB of Google Drive storage, access to the Gemini Ultra 1.0 LLM, and, eventually, Gemini’s help in Google Workspace apps like Google Docs and Sheets.

That might sound like a lot of money but it’s roughly the same price as a ChatGPT Plus subscription. But Google Gemini is going to need more than a similarly priced subscription if it hopes to distinguish itself from the AI competition.
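As a rough back-of-the-envelope check of our own (not from Google’s announcement): at roughly R19 to the US dollar in early 2024, R430 a month works out to about $22.50, close to the $20 a month OpenAI charges for ChatGPT Plus.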


Using AI to monitor the internet for terror content is inescapable – but also fraught with pitfalls
https://stuff.co.za/2024/02/09/using-ai-to-monitor-the-internet-for-terror/ | Fri, 09 Feb 2024 07:19:01 +0000

Every minute, millions of social media posts, photos and videos flood the internet. On average, Facebook users share 694,000 stories, X (formerly Twitter) users post 360,000 posts, Snapchat users send 2.7 million snaps and YouTube users upload more than 500 hours of video.

This vast ocean of online material needs to be constantly monitored for harmful or illegal content, such as content promoting terrorism and violence.

The sheer volume of content means that it’s not possible for people to inspect and check all of it manually, which is why automated tools, including artificial intelligence (AI), are essential. But such tools also have their limitations.

The concerted effort in recent years to develop tools for the identification and removal of online terrorist content has, in part, been fuelled by the emergence of new laws and regulations. This includes the EU’s terrorist content online regulation, which requires hosting service providers to remove terrorist content from their platform within one hour of receiving a removal order from a competent national authority.

Behaviour and content-based tools

In broad terms, there are two types of tools used to root out terrorist content. The first looks at certain account and message behaviour. This includes how old the account is, the use of trending or unrelated hashtags and abnormal posting volume.

In many ways, this is similar to spam detection, in that it does not pay attention to content, and is valuable for detecting the rapid dissemination of large volumes of content, which are often bot-driven.
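As a toy illustration of this first type of tool, here is a sketch of a rule-based behaviour score in Python. The signals mirror those mentioned above, but the thresholds and weights are invented for illustration; production systems typically learn them from labelled data rather than hard-coding them.

```python
from dataclasses import dataclass

@dataclass
class AccountActivity:
    account_age_days: int
    posts_last_hour: int
    trending_hashtags_per_post: float

def behaviour_risk_score(a: AccountActivity) -> float:
    """Crude additive score over the behavioural signals described above."""
    score = 0.0
    if a.account_age_days < 7:            # very new account
        score += 0.4
    if a.posts_last_hour > 30:            # abnormal posting volume, bot-like
        score += 0.4
    if a.trending_hashtags_per_post > 3:  # stuffing unrelated trending hashtags
        score += 0.2
    return score

# A two-day-old account posting 50 times an hour with heavy hashtag use
# maxes out the score and would be queued for closer inspection.
print(behaviour_risk_score(AccountActivity(2, 50, 4.0)))  # 1.0
```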

The second type of tool is content-based. It focuses on linguistic characteristics, word use, images and web addresses. Automated content-based tools take one of two approaches.

1. Matching

The first approach is based on comparing new images or videos to an existing database of images and videos that have previously been identified as terrorist in nature. One challenge here is that terror groups are known to try and evade such methods by producing subtle variants of the same piece of content.

After the Christchurch terror attack in New Zealand in 2019, for example, hundreds of visually distinct versions of the livestream video of the atrocity were in circulation.

So, to combat this, matching-based tools generally use perceptual hashing rather than cryptographic hashing. Hashes are a bit like digital fingerprints, and cryptographic hashing acts like a secure, unique identity tag. Even changing a single pixel in an image drastically alters its fingerprint, preventing false matches.

Perceptual hashing, on the other hand, focuses on similarity. It overlooks minor changes like pixel colour adjustments, but identifies images with the same core content. This makes perceptual hashing more resilient to tiny alterations to a piece of content. But it also means that the hashes are not entirely random, and so could potentially be used to try and recreate the original image.
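A minimal sketch of the contrast, assuming Python with the Pillow imaging library: SHA-256 stands in for cryptographic hashing, and a simple 64-bit “average hash” stands in for production perceptual hashes, which use far more robust algorithms. The file names are hypothetical.

```python
import hashlib
from PIL import Image

def cryptographic_hash(path: str) -> str:
    """Exact fingerprint: changing a single pixel changes the whole digest."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def average_hash(path: str, size: int = 8) -> int:
    """Tiny perceptual hash: visually similar images get similar bit patterns."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(h1: int, h2: int) -> int:
    """Number of differing bits; a small distance suggests the same core image."""
    return bin(h1 ^ h2).count("1")

# Hypothetical files: a re-encoded or lightly edited copy should sit within a
# few bits of the original, while the two SHA-256 digests differ completely.
# d = hamming_distance(average_hash("original.jpg"), average_hash("variant.jpg"))
# is_match = d <= 5  # the threshold is a tuning choice
```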

2. Classification

The second approach relies on classifying content. It uses machine learning and other forms of AI, such as natural language processing. To achieve this, the AI needs a lot of examples, such as texts labelled as terrorist content or not by human content moderators. By analysing these examples, the AI learns which features distinguish different types of content, allowing it to categorise new content on its own.

Once trained, the algorithms are then able to predict whether a new item of content belongs to one of the specified categories. These items may then be removed or flagged for human review.
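For illustration, a bare-bones version of such a classifier could look like the following scikit-learn sketch. The training texts and the review threshold are invented placeholders; real systems train on large moderator-labelled corpora and use far richer models and features.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical moderator-labelled examples: 1 = flagged category, 0 = benign.
texts = [
    "join our cause and strike at dawn",        # placeholder, not real data
    "great recipe for banana bread, thanks!",
    "weapons and targets will be shared soon",  # placeholder, not real data
    "match highlights from last night's game",
]
labels = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# New items get a probability; high-confidence items can be removed
# automatically, borderline ones flagged for human review.
score = classifier.predict_proba(["a new post to assess"])[0][1]
if score > 0.9:
    print("remove or flag for human review")
```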

This approach also faces challenges, however. Collecting and preparing a large dataset of terrorist content to train the algorithms is time-consuming and resource-intensive.

The training data may also become dated quickly, as terrorists make use of new terms and discuss new world events and current affairs. Algorithms also have difficulty understanding context, including subtlety and irony. They also lack cultural sensitivity, including variations in dialect and language use across different groups.

These limitations can have important offline effects. There have been documented failures to remove hate speech in countries such as Ethiopia and Romania, while free speech activists in countries such as Egypt, Syria and Tunisia have reported having their content removed.

We still need human moderators

So, in spite of advances in AI, human input remains essential. It is important for maintaining databases and datasets, assessing content flagged for review and operating appeals processes for when decisions are challenged.

But this is demanding and draining work, and there have been damning reports regarding the working conditions of moderators, with many tech companies such as Meta outsourcing this work to third-party vendors.

To address this, we recommend the development of a set of minimum standards for those employing content moderators, including mental health provision. There is also potential to develop AI tools to safeguard the well-being of moderators. This would work, for example, by blurring out areas of images so that moderators can reach a decision without viewing disturbing content directly.

But at the same time, few, if any, platforms have the resources needed to develop automated content moderation tools and employ a sufficient number of human reviewers with the required expertise.

Many platforms have turned to off-the-shelf products. It is estimated that the content moderation solutions market will be worth $32bn by 2031.


Read More: AI: the silent partner in your side hustle


But caution is needed here. Third-party providers are not currently subject to the same level of oversight as tech platforms themselves. They may rely disproportionately on automated tools, with insufficient human input and a lack of transparency regarding the datasets used to train their algorithms.

So, collaborative initiatives between governments and the private sector are essential. For example, the EU-funded Tech Against Terrorism Europe project has developed valuable resources for tech companies. There are also examples of automated content moderation tools being made openly available like Meta’s Hasher-Matcher-Actioner, which companies can use to build their own database of hashed terrorist content.

International organisations, governments and tech platforms must prioritise the development of such collaborative resources. Without this, effectively addressing online terror content will remain elusive.


Microsoft Copilot celebrates 1st birthday with redesign on web and mobile
https://stuff.co.za/2024/02/08/microsoft-copilot-1st-birthday-redesign/ | Thu, 08 Feb 2024 08:56:21 +0000

What better way to celebrate your first birthday than with a fresh coat of paint? Well, we could think of a few other ideas, but Microsoft has gone with the paint option for Copilot.

Copilot launched a year ago, and Microsoft says its web interface and mobile app (available on iOS and Android) now have “a more streamlined look and feel” which will supposedly make it easier to “bring your ideas to life” and “gain understanding about the world”. There’s also a “fun” new set of suggested prompts, because some people need a little help imagining things.

Oh, and there’s a new Copilot Super Bowl ad

As luck would have it, this “significant new update” doesn’t only mark Copilot’s first birthday but also happily lands a few days before Super Bowl Sunday, the world’s biggest (and only) American football championship game. We don’t need to tell you it’s a big deal if you’re in America. For everyone else, the halftime show sometimes has a few good ads… we guess.

The new paint job and ad aren’t the only changes to emerge. Microsoft has also improved the platform’s image-editing and creation feature called ‘Designer in Copilot’. Free users can now edit their generated images inline without leaving the chat.

Those who cough up some cash for the Copilot Pro subscription can also resize or regenerate images. Finally, Microsoft also mentioned something called ‘Designer GPT’ that will roll out soon and provide a “dedicated canvas” within the platform so you can “visualize your ideas”.

Sounds riveting. Here’s the ad.

(Video: Microsoft Game Day Commercial | Copilot: Your everyday AI companion)