

Elon Musk’s brain implant startup Neuralink denies that researchers abused monkeys
It says a complaint filed with the USDA was ‘misleading’
By Adi Robertson (@thedextriarchy) | Feb 15, 2022, 12:34pm EST
https://www.theverge.com/2022/2/15/...es-monkey-lab-animal-abuse-complaint-uc-davis
Elon Musk’s company Neuralink has denied claims that university researchers abused monkeys in experiments backed by the brain-computer interface startup. In a statement posted online, Neuralink responded to a federal complaint from the nonprofit Physicians Committee for Responsible Medicine (PCRM), which alleged that Neuralink and its partners at the University of California, Davis conducted inhumane tests on animals.

A PCRM statement said that monkeys at UC Davis “had their brains mutilated in shoddy experiments and were left to suffer and die.” Neuralink, by contrast, says that the lab “did and continue to meet federally mandated standards,” although it has since moved the animals to an in-house facility.

The PCRM complaint, filed with the US Department of Agriculture (USDA) against UC Davis last week, is based on documents released after a public records lawsuit. The documents outline a partnership that provided the university with around $1.4 million and ran between 2017 and 2020. The researchers tested an implant “approximately the size of a quarter coin” that was anchored to the skull of macaque monkey test subjects.

The nonprofit — which opposes the use of animals in medical experiments — says that the team “failed to provide dying monkeys with adequate veterinary care, used an unapproved substance known as ‘BioGlue’ that killed monkeys by destroying portions of their brains, and failed to provide for the psychological well-being of monkeys assigned to the experiment.”

A tweet from Neuralink calls that description “misleading” and lacking context. It says that several animals with a “wide range of pre-existing conditions unrelated to our research” were euthanized so that researchers could practice the implant surgery on cadavers, and six more were euthanized because of infections related to the implant or a complication involving BioGlue, a widely used surgical adhesive. (An internal email references necropsy reports for 23 animals in total, plus 10 living test subjects that were either shipped to Neuralink or removed from the project.)

“All animal work done at UC Davis was approved by their Institutional Animal Care and Use Committee (IACUC) as mandated by federal law, and all medical and post-surgical support, including endpoint decisions, were overseen by their dedicated and skilled veterinary staff,” Neuralink says.

Neuralink called itself “absolutely committed to working with animals in the most humane and ethical way possible.” The company also said it moved its test animals to its own facility in 2020 to improve their standard of living beyond a federally mandated minimum, working with United States Department of Agriculture (USDA) inspectors and receiving accreditation from the Association for the Assessment and Accreditation of Laboratory Animal Care (AAALAC) International.

While Elon Musk has floated far-future possibilities of mass-market implants, Neuralink is currently following in the footsteps of other research teams that have tested BCI’s potential to let people with paralysis type words or manipulate robotic arms. The company demonstrated an early iteration of its research last year when it released video of a monkey appearing to play Pong via its implant.
 
The bots in the warehouse
New robots—smarter and faster—are taking over warehouses

Most picking jobs will be done by bots
https://www.economist.com/science-a...are-taking-over-distribution-centres/21807595
A decade ago Amazon started to introduce robots into its “fulfilment centres”, as online retailers call their giant distribution warehouses. Instead of having people wandering up and down rows of shelves picking goods to complete orders, the machines would lift and then carry the shelves to the pickers. That saved time and money. Amazon’s sites now have more than 350,000 robots of various sorts deployed worldwide. But even that is not enough to secure its future.
Advances in warehouse robotics, coupled with increasing labour costs and difficulty in finding workers, have created a watershed moment in the logistics industry. With covid-19 lockdowns causing supply-chain disruptions and a boom in home deliveries that is likely to endure, fulfilment centres have been working at full tilt.

Despite the robots, many firms have to bring in temporary workers to cope with increased demand during busy periods. Competition for staff is fierce. In the run-up to the holiday shopping season in December, Amazon brought in some 150,000 extra workers in America alone, offering sign-on bonuses of up to $3,000.

The long-term implications of such a high reliance on increasingly hard-to-find labour in distribution are clear, according to a new study by McKinsey, a consultancy: “Automation in warehousing is no longer just nice to have but an imperative for sustainable growth.”

This means more robots are needed, including newer, more efficient versions to replace those already at work and advanced machines to take over most of the remaining jobs done by humans. As a result, McKinsey forecasts the warehouse-automation market will grow at a compound annual rate of 23% to be worth more than $50bn by 2030.

The new robots are coming. One of them is the prototype 600 Series bot. This machine “changes everything” according to Tim Steiner, chief executive of Ocado Group, which began in 2002 as an online British grocer and has evolved over the years into one of the leading providers of warehouse robotics.

The 600 Series is a strange-looking beast, much like a box on wheels made out of skeletal parts. That is because more than half its components are 3D-printed. As 3D-printing builds things up layer by layer it allows the shapes to be optimised, thus using the least amount of material. As a result, the 600 Series is five times lighter than the company’s present generation of bots, which makes it more agile and less demanding on battery power.

March of the machines
Ocado’s bots work in what is known as the “Hive”, a giant metallic grid at the centre of its fulfilment centres. Some of these Hives are bigger than a football pitch.

Each cell on the grid contains products stored in plastic crates, stacked 21 deep. As orders arrive, a bot is dispatched to extract a crate and transport it to a picking station, where a human worker takes all the items they need, scans each one and puts them into a bag, much as happens at a supermarket checkout.

It could take an hour or so walking around a warehouse to collect each item manually for a large order. But as hundreds of bots operate on the grid simultaneously, they are much faster. The bots are choreographed by an artificially intelligent computer system, which communicates with each machine over a wireless network. The system allows Ocado’s current bot, the 500 Series, to gather all the goods required for a 50-item order in less than five minutes.
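The kind of central choreography described above can be sketched as a toy dispatcher. Everything here is an assumption for illustration only: the grid is flattened to 2-D coordinates and the greedy nearest-idle-bot rule is invented, not Ocado's actual scheduling algorithm.

```python
import math

# Illustrative central dispatcher: assign each crate request to the
# nearest idle bot on the grid. Ocado's real scheduler coordinates
# hundreds of bots with far more sophistication; this is a toy model.

def dispatch(bots, requests):
    """bots: {bot_id: (x, y)} positions of idle bots.
    requests: list of (x, y) crate locations, in order of arrival.
    Returns {bot_id: crate} using a greedy nearest-bot assignment."""
    idle = dict(bots)
    plan = {}
    for crate in requests:
        if not idle:
            break  # more requests than idle bots; remainder must wait
        best = min(idle, key=lambda b: math.dist(idle[b], crate))
        plan[best] = crate
        del idle[best]
    return plan

print(dispatch({"b1": (0, 0), "b2": (5, 5)}, [(1, 1), (6, 5)]))
```

A real system would also plan collision-free paths and batch items across orders, but the core loop, match a request to a nearby free bot, is the same.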



The new 600 Series will match or better its predecessor’s performance and use less energy. It also “unlocks a cascade of benefits”, says Mr Steiner, allowing Hives to be smaller and lighter. This means they can be installed in weeks rather than months and at a lower cost. That will make “micro” fulfilment centres viable. Most fulfilment centres are housed in large buildings on out-of-town trading estates, but smaller units could be sited in urban areas closer to customers. This would speed up deliveries, in some cases to within hours.

Amazon is also developing more-efficient robots. Its original machines were known as Kivas, after Kiva Systems, the Massachusetts-based firm that manufactured them. The Kiva is a squat device which can slip under a stack of head-height shelves in which goods are stored. The robot then lifts and carries the shelves to a picking station. In 2012 Amazon bought Kiva Systems for $775m and later changed its name to Amazon Robotics.

Welcome to the jungle
Amazon Robotics has since developed a family of bots, including a smaller version of a Kiva called Pegasus. These will allow it to pack more goods into its fulfilment centres and also use bots in smaller inner-city distribution sites. To prepare for a more automated future, Amazon Robotics recently opened a new robot manufacturing plant in Westborough, Massachusetts, to boost its output.

In 2014, when it became clear that future Kivas would be made exclusively for Amazon, Romain Moulin and Renaud Heitz, a pair of engineers working for a medical firm, decided to set up Exotec, a French rival, to produce a different sort of robotic warehouse. The firm has developed a three-dimensional system, which uses bots called Skypods. Looking a bit like Kivas, they also roam the warehouse floor. But instead of moving shelves, Skypods climb them. Once the robot reaches the necessary level, it extracts a crate, climbs down and delivers it to a picking station.

Skypods, says Mr Moulin, maximise the space in a warehouse because they can ascend shelving stacked 12 metres high. Being modular, the system can be expanded easily. As well as returning crates to the shelves, Skypods also take them to places to be refilled.

A number of retailers have started using Skypods, including Carrefour, a giant French supermarket group, Gap, an American clothing firm, and Uniqlo, a Japanese one. Because such robots move quickly and could cause injury—Skypods zoom along at four metres per second (14kph)—they tend to operate in closed areas. If Amazon’s staff need to enter the robot area they don a special safety vest. This contains electronics which signal to any nearby bots that a human is present. The bot will then stop or take an alternative route.

Some robots, however, are designed to work alongside people in warehouses. They often ferry things between people taking goods off shelves and pallets to people putting them into bags and boxes for shipping. Such systems can avoid the cost of installing fixed infrastructure, which lets warehouses be reconfigured quickly—useful for logistics centres that work for multiple retailers and have to deal with constantly changing product lines.

When robots work among people, however, they have to be fitted with additional safety systems, such as cameras, radar and other sensors, to avoid bumping into staff. Hence they tend to move slowly and cautiously, which can result in bots frequently coming to a standstill and slowing operations. However, machines that are more capable and aware of their surroundings are on the way.

For instance, NEC, a Japanese electronics group, has started using “risk-sensitive stochastic control technology”, which is software similar to that used in finance to avoid high-risk investments. In this case, though, it allows a robot to weigh up risks when taking any action, such as selecting the safest and fastest route through a warehouse. In trials, NEC says it doubles the average speed of a robot without compromising safety.
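NEC's actual algorithm is proprietary, but the finance analogy suggests a mean-variance trade-off. The sketch below scores each candidate route by its expected travel time plus a penalty on its uncertainty; the routes, numbers, and scoring rule are all assumptions, not NEC's method.

```python
# Illustrative risk-sensitive route choice: prefer routes whose time is
# both short on average AND predictable, rather than fast on average but
# occasionally dangerous. The mean-plus-penalised-deviation score is an
# invented stand-in for NEC's proprietary stochastic control.

def pick_route(routes, risk_aversion=2.0):
    """routes: {name: (expected_seconds, variance_seconds_sq)}.
    Returns the name of the route with the lowest risk-adjusted score."""
    def score(item):
        mean, var = item[1]
        return mean + risk_aversion * var ** 0.5  # penalise std deviation
    return min(routes.items(), key=score)[0]

routes = {
    "through_aisle": (30.0, 100.0),  # fast on average, but people nearby
    "perimeter":     (45.0, 4.0),    # slower, but highly predictable
}
print(pick_route(routes))
```

With `risk_aversion` set to zero the controller reverts to picking the fastest route on average; raising it makes the robot behave more like the cautious machines described above, but deliberately rather than reactively.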

New tricks
The toughest job to automate in a warehouse is picking and packing, hence the demand for extra pairs of hands during busy periods. This task is far from easy for robots because fulfilment centres stock tens of thousands of different items, in many shapes, sizes and weights.

Nevertheless, Amazon, Ocado, Exotec and others are beginning to automate the task by placing robotic arms at some picking stations. These arms tend to use cameras and read barcodes to identify goods, and suction pads and other mechanisms to pick them up. Machine learning, a form of AI, is employed to teach the robots how to handle specific items, for example not to put potatoes on top of eggs.

Ocado is also developing an arm which could bypass a picking station and take items directly from crates in the Hive. Fetch Robotics, a Silicon Valley producer of logistics robots that was acquired last year by Zebra Technologies, a computing firm, has developed a mobile picking arm which can travel around a fulfilment centre.

Boston Dynamics, another Massachusetts robot-maker, has come up with a heavyweight mobile version called Stretch, which can unpack lorries and put boxes on pallets. On January 26th DHL, a logistics giant, placed the first order for Stretch robots. It will deploy them in its North American warehouses over the next three years.

That timetable gives a clue that progress will not be rapid. It will take ten to 15 years before robots begin to be adept at picking and packing goods, reckons Zehao Li, the author of a new report on warehouse robotics for IDTechEx, a firm of British analysts. Some companies think their bots will be able to pick 80% or so of their stock over the coming years, although much depends on the range of goods carried by different operations.

Objects with irregular shapes, like bananas or loose vegetables, can be hard for a robot to grasp if it has primarily been built to pick up products in neat packages. The bot might also be restricted in what weight it can lift, so would struggle with a flat-screen television or a heavy cask of beer. Further into the future, systems could emerge to overcome many of these limitations, such as multi-arm robots.

So what jobs will remain? On the warehouse floor, at least, that mainly leaves technicians maintaining and fixing robots, says Mr Li. He thinks there are also likely to be a handful of supervisors watching over the bots and lending a hand if there remains anything that their mechanical brethren still can’t handle. It is not just inside the warehouse where jobs will go, but outside, too, once driverless delivery vehicles are allowed. At that point many products will travel through the supply chain and to people’s homes untouched by human hand.

People will also be employed building robots. Amazon Robotics’s new factory will create more than 200 new manufacturing jobs, although that dwindles into insignificance compared with the more than 1m jobs which the pioneer of e-commerce has created since the first robots arrived in its fulfilment centres. A lot of those jobs are bound to go, although many are monotonous and strenuous, which is why they are hard to fill.

However, other jobs will emerge. Technological change inevitably creates new roles for people. In the 1960s there used to be thousands of telephone switchboard operators, a job which has almost disappeared since exchanges became automated. But the number of other jobs in telecoms has soared. As logistics gets more efficient through greater automation, and online businesses grow, the overall level of employment in e-commerce should still increase. Many of these roles will be different sorts of jobs, just as there are many different sorts of robot.





 
The new version of GPT-3 is much better behaved (and should be less toxic)
OpenAI has trained its flagship language model to follow instructions, making it spit out less unwanted text—but there's still a way to go.
By Will Douglas Heaven
January 27, 2022
https://www.technologyreview.com/20...atbot-language-model-ai-toxic-misinformation/
OpenAI has built a new version of GPT-3, its game-changing language model, that it says does away with some of the most toxic issues that plagued its predecessor. The San Francisco-based lab says the updated model, called InstructGPT, is better at following the instructions of people using it—known as “alignment” in AI jargon—and thus produces less offensive language, less misinformation, and fewer mistakes overall—unless explicitly told not to do so.

Large language models like GPT-3 are trained using vast bodies of text, much of it taken from the internet, in which they encounter the best and worst of what people put down in words. That is a problem for today's chatbots and text-generation tools. The models soak up toxic language—from text that is racist and misogynistic or that contains more insidious, baked-in prejudices—as well as falsehoods.

OpenAI has made InstructGPT the default model for users of its application programming interface (API)—a service that gives access to the company’s language models for a fee. GPT-3 will still be available but OpenAI does not recommend using it. “It’s the first time these alignment techniques are being applied to a real product,” says Jan Leike, who co-leads OpenAI’s alignment team.

Previous attempts to tackle the problem included filtering out offensive language from the training set. But that can make models perform less well, especially in cases where the training data is already sparse, such as text from minority groups.

The OpenAI researchers have avoided this problem by starting with a fully trained GPT-3 model. They then added another round of training, using reinforcement learning to teach the model what it should say and when, based on the preferences of human users.

To train InstructGPT, OpenAI hired 40 people to rate GPT-3’s responses to a range of prewritten prompts, such as, “Write a story about a wise frog called Julius” or “Write a creative ad for the following product to run on Facebook.” Responses that they judged to be more in line with the apparent intention of the prompt-writer were scored higher. Responses that contained sexual or violent language, denigrated a specific group of people, expressed an opinion, and so on, were marked down. This feedback was then used as the reward in a reinforcement learning algorithm that trained InstructGPT to match responses to prompts in ways that the judges preferred.
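The feedback loop described above can be sketched in miniature. The responses, scores, and reweighting rule below are invented stand-ins: OpenAI's actual pipeline trains a separate reward model on the raters' rankings and then optimises the language model against it with a policy-gradient method, which this toy update only gestures at.

```python
# Toy sketch of learning from human feedback: rater scores become
# rewards, and the policy shifts probability toward high-reward
# responses. All names and numbers are invented for illustration.

def update_policy(policy, rewards, lr=0.5):
    """policy: {response: probability}; rewards: {response: score in [0, 1]}.
    Reweight each response in proportion to its reward, then renormalize
    so the probabilities still sum to one."""
    weights = {r: p * (1 + lr * rewards[r]) for r, p in policy.items()}
    total = sum(weights.values())
    return {r: w / total for r, w in weights.items()}

policy = {"helpful answer": 0.5, "toxic rant": 0.5}
rewards = {"helpful answer": 1.0, "toxic rant": 0.0}  # from human raters
for _ in range(5):
    policy = update_policy(policy, rewards)
print(policy)  # probability mass has shifted toward the preferred response
```

Even this crude loop shows the mechanism: repeated rounds of human preference scores steadily reshape what the model is most likely to say.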

OpenAI found that users of its API favored InstructGPT over GPT-3 more than 70% of the time. “We're no longer seeing grammatical errors in language generation,” says Ben Roe, head of product at Yabble, a market research company that uses OpenAI’s models to create natural-language summaries of its clients’ business data. “There’s also clear progress in the new models' ability to understand and follow instructions."

“It is exciting that the customers prefer these aligned models so much more,” says Ilya Sutskever, chief scientist at OpenAI. “It means that there are lots of incentives to build them.”

The researchers also compared different-sized versions of InstructGPT and found that users preferred the responses of a 1.3 billion-parameter InstructGPT model to those of a 175 billion-parameter GPT-3, even though it was more than 100 times smaller. That means alignment could be an easy way of making language models better, rather than just increasing their size, says Leike.

“This work takes an important step in the right direction,” says Douwe Kiela, a researcher at Hugging Face, an AI company working on open-source language models. He suggests that the feedback-driven training process could be repeated over many rounds, improving the model even more. Leike says OpenAI could do this by building on customer feedback.

InstructGPT still makes simple errors, sometimes producing irrelevant or nonsensical responses. If given a prompt that contains a falsehood, for example, it will take that falsehood as true. And because it has been trained to do what people ask, InstructGPT will produce far more toxic language than GPT-3 if directed to do so.

Ehud Reiter, who works on text-generation AI at the University of Aberdeen, UK, welcomes any technique that reduces the amount of misinformation language models produce. But he notes that for some applications, such as AI that gives medical advice, no amount of falsehood is acceptable. Reiter questions whether large language models, based on black-box neural networks, could ever guarantee user safety. For that reason, he favors a mix of neural networks plus symbolic AI, in which hard-coded rules constrain what a model can and cannot say.
 
Marine propulsion
Nature does not use propellers. So why do people?

Real fintech

https://www.economist.com/science-a...-not-use-propellers-so-why-do-people/21806832
No known sea-creature uses propellers. Perhaps that is because they are too difficult to evolve from existing animal body plans. Or perhaps it is because they are not particularly good at doing what they do. When pushing water around for propulsive purposes, bigger is not only more powerful but also more efficient. But the bigger a propeller is, the harder it is to accommodate to a hull and the more it risks adding to a ship’s draft and thus snagging the seabed. Even the biggest ships’ propellers are therefore only around ten metres in diameter.

Fins and flippers, by contrast, extend sideways, so do not suffer from such geometric restrictions. That means they can get big enough to push a lot more water around. Nor, unlike propellers, need they be rigid. In fact, being flexible is almost part of the definition (a rigid fin might better be described as an oar). They are therefore not easily damaged by contact with the seabed or other objects. Fins have thus become evolution’s go-to accoutrement for marine propulsion. From fish, via ichthyosaurs, to dolphins and whales, they turn up again and again. So, from plesiosaurs and turtles to seals and penguins, do their cousins, flippers.

In light of this evolutionary vote of confidence in fins, ships’ propellers look like a technology ripe for a bit of biomimetic disruption. And that may now have arrived in the shape of Benjamin Pietro Filardo, an ex-marine biologist and architect who was looking into ways of designing devices to extract power from water currents. His plan was to use flexible materials, so that they could easily shake off any debris which got entangled in them. He then realised that the undulations involved might also usefully be turned into thrust.

Mr Filardo has put his money where his mouth is. His firm, Pliant Energy Systems, based in New York, has developed Velox (pictured), a prototype propelled by flexible fins, port and starboard, that are reminiscent of yet another animal’s approach to swimming—the undulating mantle of a cuttlefish. Velox can travel on the surface, underwater, and also across mud or ice, with its fins then acting in the manner of a pair of robotic caterpillars.

According to Mr Filardo, Velox produces around three times as much thrust per unit of energy expended as a typical small boat’s propeller can manage. And he hopes, soon, to do even better than this. Having demonstrated his device to America’s Office of Naval Research, he has piqued their interest. The result is a commission for a follow-up, c-Ray, that should be lighter, faster and yet more efficient.

Unlike Velox, which is controlled by cable, c-Ray will be autonomous—the ultimate aim being to develop co-operative swarms of craft for jobs such as mine detection and removal, reconnaissance and anti-submarine patrols. From a naval perspective, however, undulatory propulsion may have a yet-more-important advantage. Submarines are often detected by the noise they make, much of which comes from the propeller and the shaft driving it. Undulatory propulsion, moving more water at lower speed, should be quieter than any propeller. Nor does it involve a noisy phenomenon called cavitation, caused by transient gas bubbles that form in response to propeller blades’ pressure.

This matters, because Velox-like fins may prove to be a technology that can be scaled up to propel full-sized submarines. As Mr Filardo observes, the largest marine animals of all, the great whales, are fin-propelled, even if their fins are arranged differently from Velox’s. Indeed, the biggest of the lot, a blue whale, can travel at more than 20 knots, which would not disgrace the average submarine. Previous attempts to scale-up fin-propulsion have failed, he says, because they have not found the necessary compromise between stiffness and flexibility. He reckons he has.

Travelling waves
Even if they do not make the big-time, naval-warfare-wise, swarms of Velox’s descendants might be deployed for tasks from harvesting scallops without destructive trawling to mining nodules from the seabed without harming habitats—for undulatory propulsion does not disturb sediment. In a world where the creation of new carbon sinks may become big business, they might even be used to plant beds of seagrass on a vast scale. Craft propelled by undulation would also have less risk of harming swimming mammals, such as manatees and human beings, which sometimes get chewed up by propellers.

Mr Filardo is even looking into the idea of merging his interests, by designing a craft with undulating propulsion that can moor itself and then recharge its batteries from disturbances to its fins caused by passing ocean currents. Just how far he or others will be able to push this new approach to propulsion remains to be seen. But if the engineering works, and can indeed be scaled up, ships’ propellers may one day look as old-fashioned as sails.





 
Experts say that soon, almost the entire internet could be generated by AI
https://futurism.com/the-byte/ai-internet-generation
Generation AI
The Internet of the future could be written by bots, but will that make it better or worse? Experts at the Copenhagen Institute for Future Studies (CIFS) are raising questions about AI-generated content, and how it could come to dominate the metaverse and other digital locations.

CIFS expert Timothy Shoup estimates that 99 percent to 99.9 percent of the internet’s content will be AI-generated by 2025 to 2030, especially if models like OpenAI’s GPT-3 achieve wider adoption.

“The internet would be completely unrecognizable,” Shoup told colleague Sofie Hvitved.

As its capabilities advance, the idea is that AI could start to generate entire online worlds, along with all the stuff that inhabits them — not to mention all the online material that’s currently mostly made by humans.

“Earlier this year, OpenAI released DALL-E, which uses a 12-billion-parameter version of GPT-3 to interpret natural language inputs and generate corresponding images,” Hvitved wrote. “DALL-E can now create images of realistic objects as well as objects that do not exist in reality.”

Future Net
It’s not inherently a bad thing for AIs to generate web content. In theory, their work could build virtual worlds that are more inclusive around things like gender, race, and culture. And programs like Copilot, which Hvitved says helps GitHub coders generate up to 30 percent of their code from simple natural-language prompts, could open up new creative realms to far more people.

That’d require a broad realignment of the current reality of AI, though, which is widely known to reproduce the biases of its creators — and that’s without getting into the fear that it will start to fill the web with limitless amounts of targeted misinformation.

That said, if there are ways to make the internet a better, safer place with less work, we can’t think of a reason not to try a well-regulated and balanced system that leverages the power of advanced AI.

Using artificial intelligence to find anomalies hiding in massive datasets
A new machine-learning technique could pinpoint potential power grid failures or cascading traffic bottlenecks in real time.
Adam Zewe | MIT News Office
https://news.mit.edu/2022/artificial-intelligence-anomalies-data-0225
Identifying a malfunction in the nation’s power grid can be like trying to find a needle in an enormous haystack. Hundreds of thousands of interrelated sensors spread across the U.S. capture data on electric current, voltage, and other critical information in real time, often taking multiple recordings per second.

Researchers at the MIT-IBM Watson AI Lab have devised a computationally efficient method that can automatically pinpoint anomalies in those data streams in real time. They demonstrated that their artificial intelligence method, which learns to model the interconnectedness of the power grid, is much better at detecting these glitches than some other popular techniques.

Because the machine-learning model they developed does not require annotated data on power grid anomalies for training, it would be easier to apply in real-world situations where high-quality, labeled datasets are often hard to come by. The model is also flexible and can be applied to other situations where a vast number of interconnected sensors collect and report data, like traffic monitoring systems. It could, for example, identify traffic bottlenecks or reveal how traffic jams cascade.

“In the case of a power grid, people have tried to capture the data using statistics and then define detection rules with domain knowledge to say that, for example, if the voltage surges by a certain percentage, then the grid operator should be alerted. Such rule-based systems, even empowered by statistical data analysis, require a lot of labor and expertise. We show that we can automate this process and also learn patterns from the data using advanced machine-learning techniques,” says senior author Jie Chen, a research staff member and manager of the MIT-IBM Watson AI Lab.

The co-author is Enyan Dai, an MIT-IBM Watson AI Lab intern and graduate student at the Pennsylvania State University. This research will be presented at the International Conference on Learning Representations.

Probing probabilities

The researchers began by defining an anomaly as an event that has a low probability of occurring, like a sudden spike in voltage. They treat the power grid data as a probability distribution, so if they can estimate the probability densities, they can identify the low-density values in the dataset. Those data points which are least likely to occur correspond to anomalies.
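That definition translates directly into code. The sketch below compresses the idea to a single Gaussian fitted to one sensor's readings, a drastic simplification of the paper's multivariate normalizing flows, and the voltage numbers and threshold are invented.

```python
import statistics
from math import exp, pi, sqrt

# Minimal version of density-based anomaly detection: fit a probability
# density to the readings (here just one Gaussian, far simpler than the
# normalizing flows in the actual research), then flag any reading whose
# estimated density falls below a threshold.

def gaussian_pdf(x, mu, sigma):
    return exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * sqrt(2 * pi))

def find_anomalies(readings, threshold=0.01):
    mu = statistics.mean(readings)
    sigma = statistics.stdev(readings)
    return [x for x in readings if gaussian_pdf(x, mu, sigma) < threshold]

# Invented grid-voltage samples with one sudden spike:
voltages = [229.9, 230.1, 230.0, 229.8, 230.2, 230.1, 229.9, 260.0]
print(find_anomalies(voltages))  # the spike sits in a low-density region
```

The hard part the MIT-IBM work addresses is that real grid data is many correlated time series at once, so the density must be estimated jointly across sensors rather than one at a time as here.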

Estimating those probabilities is no easy task, especially since each sample captures multiple time series, and each time series is a set of multidimensional data points recorded over time. Plus, the sensors that capture all that data are conditional on one another, meaning they are connected in a certain configuration and one sensor can sometimes impact others.

To learn the complex conditional probability distribution of the data, the researchers used a special type of deep-learning model called a normalizing flow, which is particularly effective at estimating the probability density of a sample.

They augmented that normalizing flow model using a type of graph, known as a Bayesian network, which can learn the complex, causal relationship structure between different sensors. This graph structure enables the researchers to see patterns in the data and estimate anomalies more accurately, Chen explains.

“The sensors are interacting with each other, and they have causal relationships and depend on each other. So, we have to be able to inject this dependency information into the way that we compute the probabilities,” he says.

This Bayesian network factorizes, or breaks down, the joint probability of the multiple time series data into less complex, conditional probabilities that are much easier to parameterize, learn, and evaluate. This allows the researchers to estimate the likelihood of observing certain sensor readings, and to identify those readings that have a low probability of occurring, meaning they are anomalies.
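The factorization works like the chain rule of probability: if the learned graph says sensor c depends only on b, and b only on a, then p(a, b, c) = p(a) * p(b | a) * p(c | b). The toy probability tables below are invented; the real model learns both the graph structure and the conditional densities from data.

```python
from math import log

# Toy Bayesian-network factorization over three dependent sensors.
# Each factor is a small, easy-to-estimate conditional table, and the
# joint log-probability is just the sum of the factors' logs.

p_a = {"normal": 0.9, "high": 0.1}
p_b_given_a = {"normal": {"normal": 0.95, "high": 0.05},
               "high":   {"normal": 0.30, "high": 0.70}}
p_c_given_b = {"normal": {"normal": 0.95, "high": 0.05},
               "high":   {"normal": 0.20, "high": 0.80}}

def log_joint(a, b, c):
    """log p(a, b, c) under the factorization p(a) p(b|a) p(c|b)."""
    return log(p_a[a]) + log(p_b_given_a[a][b]) + log(p_c_given_b[b][c])

# Readings with low joint log-probability are anomaly candidates:
print(log_joint("normal", "normal", "normal"))  # typical, high probability
print(log_joint("normal", "high", "high"))      # surprising combination
```

Note what the dependency structure buys: sensor b reading "high" is not anomalous in itself, only in combination with a reading "normal", which is exactly the kind of pattern per-sensor thresholds miss.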

Their method is especially powerful because this complex graph structure does not need to be defined in advance — the model can learn the graph on its own, in an unsupervised manner.

A powerful technique

They tested this framework by seeing how well it could identify anomalies in power grid data, traffic data, and water system data. The datasets they used for testing contained anomalies that had been identified by humans, so the researchers were able to compare the anomalies their model identified with real glitches in each system.

Their model outperformed all the baselines by detecting a higher percentage of true anomalies in each dataset.
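Comparing flagged readings against human-labeled glitches is, in effect, a precision/recall computation; a generic sketch with made-up indices (not the paper's evaluation code):

```python
# Hypothetical labeled test set: indices of true anomalies vs. model flags.
true_anomalies = {3, 17, 42, 58}
model_flags = {3, 17, 42, 60}

tp = len(true_anomalies & model_flags)   # correctly detected anomalies
precision = tp / len(model_flags)        # fraction of flags that were real glitches
recall = tp / len(true_anomalies)        # fraction of real glitches that were caught
print(precision, recall)  # 0.75 0.75
```

"Detecting a higher percentage of true anomalies" corresponds to higher recall in this framing.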

“For the baselines, a lot of them don’t incorporate graph structure. That perfectly corroborates our hypothesis. Figuring out the dependency relationships between the different nodes in the graph is definitely helping us,” Chen says.

Their methodology is also flexible. Armed with a large, unlabeled dataset, they can tune the model to make effective anomaly predictions in other situations, like traffic patterns.

Once the model is deployed, it would continue to learn from a steady stream of new sensor data, adapting to possible drift of the data distribution and maintaining accuracy over time, says Chen.

Though this particular project is close to its end, he looks forward to applying the lessons he learned to other areas of deep-learning research, particularly on graphs.

Chen and his colleagues could use this approach to develop models that map other complex, conditional relationships. They also want to explore how they can efficiently learn these models when the graphs become enormous, perhaps with millions or billions of interconnected nodes. And rather than finding anomalies, they could also use this approach to improve the accuracy of forecasts based on datasets or streamline other classification techniques.


What Do We Do About the Biases in AI?
by James Manyika, Jake Silberg, and Brittany Presten
Summary: Over the past few years, society has started to wrestle with just how much human biases can make their way into artificial intelligence systems—with harmful results. At a time when many companies are looking to deploy AI systems across their operations, being acutely aware of those risks and working to reduce them is an urgent priority. What can CEOs and their top management teams do to lead the way on bias and fairness? Among others, we see six essential steps: First, business leaders will need to stay up to date on this fast-moving field of research. Second, when your business or organization is deploying AI, establish responsible processes that can mitigate bias. Consider using a portfolio of technical tools, as well as operational practices such as internal “red teams,” or third-party audits. Third, engage in fact-based conversations around potential human biases. This could take the form of running algorithms alongside human decision makers, comparing results, and using “explainability techniques” that help pinpoint what led the model to reach a decision – in order to understand why there may be differences. Fourth, consider how humans and machines can work together to mitigate bias, including with “human-in-the-loop” processes. Fifth, invest more, provide more data, and take a multi-disciplinary approach in bias research (while respecting privacy) to continue advancing this field. Finally, invest more in diversifying the AI field itself. A more diverse AI community would be better equipped to anticipate, review, and spot bias and engage communities affected.

Link to Whole Article: https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai
 
Transhumanist Conference Explores Crypto, Blockchain, and… Theology?
https://www.techbuzz.news/transhumanist-conference-explores-crypto-blockchain-and-theology-/
The perennial discussion of humanity's origin, destiny, and place in the universe is perhaps the oldest and most enduring of human questions.

The current dialogue around blockchains, cryptocurrencies, and decentralized networks is one of the newest and hottest topics in the proverbial public square.

An upcoming conference aims to bring those seemingly disparate topics together. The March 19th event in Provo promises stimulating conversation and colliding ideas, with speakers among the foremost thinkers in the increasingly significant field of distributed ledger technology.

“One of our big values at the MTA is this notion of friendship and building bridges across ideological divides,” says Carl Youngblood, President and CEO of the Mormon Transhumanist Association. “So we get a lot of people who don't normally talk to each other to get in the same room together and share weird ideas.”

For many first hearing of the Mormon Transhumanist Association, the initial reaction is something like confusion. What exactly is the Mormon Transhumanist Association?

Let’s start with the most esoteric word in that title. Transhumanism is a philosophy examining how humanity might (and maybe should) carefully use technology to transcend present limitations such as aging and the limits of intelligence. The movement was first catalyzed in the 1990s by British philosopher and technologist Max More, himself a past speaker at the annual MTA conference.

The Mormon Transhumanist Association (MTA) is a nonprofit “dedicated to radical human flourishing through the compassionate use of science and technology,” according to Youngblood.

Youngblood, a software engineer who was very early to blockchain technology in 2010, co-founded the group in 2006 along with colleague Lincoln Cannon. Both men are Provo residents and BYU alumni. Despite (or maybe because of) the arcane topic, the MTA has had active participation and ongoing meetings for at least the past decade. MTA members span the religious and ideological gamut, from strict Sunday School orthodoxy to those only tangentially familiar with the Church of Jesus Christ of Latter-day Saints.

The MTA’s website continues, “Although we are neither a religious organization nor affiliated with any religious organization, we support our members in their personal religious affiliations, Mormon or otherwise, and encourage them to adapt Transhumanism to their unique situations.”

Youngblood says the upcoming conference is ideal for anybody interested in blockchain technology, but is quick to point out, “It's not just for crypto bros. There's a discussion about the ways that society is being transformed by these technologies—things like new ways of organizing ourselves in terms of corporate governance, public sector governance, economic development, international development, and how we can help people lift out of poverty through some of these technologies.” Of course there will also be discussion around “the philosophical and ethical ramifications, and even some of the potential theological implications of the notion of decentralization. For example, people interested in religion and Mormonism in particular, but anyone who loves philosophy or religion, would find some interest in the very notion that perhaps decentralization is necessary for humans to flourish into their full potential.”

Every year the conference brings an array of qualified speakers from religious and technology fields. Past speakers include religious scholars like Richard Bushman, Rosalynde Welch, Adam Miller, and Melissa Inouye, each from a background and emphasis with the Church of Jesus Christ of Latter-day Saints, the eponymous “Mormons” of the MTA. From the technological and transhumanist side, past speakers include longevity researcher Aubrey de Grey, transhumanist philosopher and cryonics researcher Max More, and cryptographer Ralph Merkle. Merkle was the inventor of cryptographic hashing and co-inventor of public key cryptography, foundational technologies for basic internet security as well as the burgeoning cryptocurrency and blockchain space.

This year’s conference will focus on the “Decentralization of Power” and new forms of governance. Headline speakers include Laura Shin, former senior editor at Forbes and award-winning podcast host and author, and Tomicah Tillemann. Shin launched the “Unchained” podcast in 2016 and became one of the foremost journalists exploring cryptocurrency and related topics. Tillemann is a public policy expert with a crypto venture fund led by former a16z partner Katie Haun. Tillemann previously worked with the State Department and Senate Foreign Relations Committee.

Additional speakers include:


 
Construction techniques
The rise of 3D-printed houses

Your next home could be a printout
https://www.economist.com/science-and-technology/the-rise-of-3d-printed-houses/21803667

A batch of new houses across California is selling unusually fast. In the past two months, 82 have been snapped up, and the waiting list is 1,000 long. That demand should, though, soon be satisfied—for, while it can take weeks to put up a conventional bricks-and-mortar dwelling, Palari Homes and Mighty Buildings, the collaborators behind these houses, are able to erect one in less than 24 hours. They can do it so rapidly because their products are assembled from components prefabricated in a factory. This is not, in itself, a new idea. But the components involved are made in an unusual way: they are printed.

Three-dimensional (3d) printing has been around since the early 1980s, but is now gathering steam. It is already employed to make things ranging from orthopaedic implants to components for aircraft. The details vary according to the products and processes involved, but the underlying principle is the same. A layer of material is laid down and somehow fixed in place. Then another is put on top of it. Then another. Then another. By varying the shape, and sometimes the composition of each layer, objects can be crafted that would be difficult or impossible to produce with conventional techniques. On top of this, unlike conventional manufacturing processes, no material is wasted.

Just press “print”
In the case of Palari Homes and Mighty Buildings, the printers are rather larger than those required for artificial knees and wing tips, and the materials somewhat cruder. But the principle is the same. Nozzles extrude a paste (in this case a composite) which is then cured and hardened by ultraviolet light. That allows Mighty Buildings to print parts such as eaves and ceilings without the need for supporting moulds—as well as simpler things like walls. These are then put together on site and attached to a permanent foundation by Palari Homes’ construction workers.

Not only does 3d printing allow greater versatility and faster construction, it also promises lower cost and a more environmentally friendly approach than is possible at present. That may make it a useful answer to two challenges now facing the world: a shortage of housing and climate change. About 1.6bn people—more than 20% of Earth’s population—lack adequate accommodation. And the construction industry is responsible for 11% of the world’s man-made carbon-dioxide emissions. Yet the industry’s carbon footprint shows no signs of shrinking.


Automation brings huge cost savings. Mighty Buildings says computerising 80% of its printing process means the firm needs only 5% of the labour that would otherwise be involved. It has also doubled the speed of production. This is welcome news, the construction industry having struggled for years to improve its productivity. Over the past two decades this has grown at only a third of the rate of productivity in the world economy as a whole, according to McKinsey, a consultancy. Digitalisation has been slower than in nearly any other trade. The industry is also plagued, in many places, by shortages of skilled labour. And that is expected to get worse. In America, for example, around 40% of those employed in construction are expected to retire within a decade.

The environmental benefits come in several ways, but an important one is that there is less need to move lots of heavy stuff about. Palari Homes, for instance, estimates that prefabricating its products reduces the number of lorry journeys involved in building a house sufficiently to slash two tonnes off the amount of carbon dioxide emitted per home.

Palari Homes and Mighty Buildings are not, moreover, alone in their endeavours. Similar projects are being started up all over the place. The vast majority print structures using concrete. 14Trees, a joint venture between Holcim—the world’s biggest cement-maker—and cdc Group, a British-government development-finance outfit, operates in Malawi. It says it is able to print a house there in just 12 hours, with a price tag of less than $10,000. Besides being cheap and quick, 14Trees says this process is green as well. Holcim claims that by depositing the precise amount of cement required and thereby reducing waste, 3d printing generates only 30% as much carbon dioxide as using burnt-clay brick, a common technique in Malawi.

In Mexico, meanwhile, a charity for the homeless called New Story has created a partnership with icon, a 3d-printing firm, to erect ten houses with floor areas of 46 square metres. Each was printed in around 24 hours (though these hours were spread over several days), with the final features assembled by Échale, another local charity. And in Europe the keys to the continent’s first 3d-printed home, in Eindhoven, in the Netherlands (pictured above), were handed over to its tenants on July 30th.

Layer cakes
The house in question, the first of five detached, two-bedroom dwellings in a project co-ordinated by Eindhoven’s municipal government and the city’s University of Technology, is a collaboration between several firms. The Dutch arm of Saint-Gobain, a French building-materials company, developed the concrete mortar needed. Van Wijnen, a construction firm, built the thing, while Witteveen+Bos, a consultancy, was responsible for the engineering. It is being rented out by its owner, Vesteda, a Dutch residential-property investor.

Making the cement involved in projects like this is not, however, a green process. It turns calcium carbonate in the form of limestone into calcium oxide and carbon dioxide, and is reckoned responsible for about 8% of anthropogenic emissions of that gas. A group at Texas a&m University, led by Sarbajit Banerjee, has therefore developed a way to dispense with it.


Dr Banerjee’s new building material was inspired by a project he masterminded some years ago to construct supply roads to remote parts of the Canadian province of Alberta using stuff immediately to hand. The road metal he devised combined local soil with a mulch of wood fibres, and was held together by liquid or water-soluble silicates that then hardened and acted as cement. To build houses he uses whatever clay and rock debris is lying around under the topsoil near the construction site, crushes it into a powder and blends it with silicates. The result can then be squeezed through a nozzle, after which it rapidly consolidates and gains strength, so as to hold its shape and bear the weight of the next layer. The process is thus doubly green. It eliminates both cement and the need to transport to the site, often over long distances, the sand and aggregates used in conventional concrete.

Concrete benefits
There are limitations to 3d-printed homes. For a start, construction codes need to be tweaked to accommodate them. To this end ul, one of America’s largest certifying agencies, has collaborated with Mighty Buildings to develop the first 3d-printing standard. The guidelines will be included in the new International Residential Code, which is in use in, or has been adopted by, all American states save Wisconsin. While this is a welcome boost to a fledgling industry, most governments have yet to come up with country-specific standards. There are also questions about the quality and finish of homes built by 3d printers.

Even so, the direction of travel looks promising. Last year, plans for a 3d-printed apartment building were approved in Germany. This three-floored structure, assembled by Peri, a German construction company, from parts made using printers developed by Cobod, a Danish firm, will contain five flats. Use of the technology is also expanding in the Middle East and Asia. Dubai’s government wants a quarter of new buildings in the country to be 3d-printed by 2030, and is dedicating a district on the outskirts of its eponymous capital to host 3d-printing companies and their warehouses. Saudi Arabia wants to use 3d printing to build 1.5m houses over the next decade. And India’s Ministry of Housing and Urban Affairs wants to use 3d printing to address the country’s housing shortages.

If successful, building by 3d printing is likely to spread beyond housing. Opportunities also exist in warehousing, offices and other commercial buildings. And beyond earthly structures, nasa, America’s space agency, is exploring the use of 3d printing to build landing pads, accommodation and roads on Mars and the Moon. There is no soil on those two celestial bodies, just shattered rock called regolith. Dr Banerjee’s group, which is working with nasa, says its approach to 3d printing functions just as well with this material. “We would ultimately like to have property on Mars and the Moon but we’re not going to be able to take concrete up there with us,” says Dr Banerjee. “We’re going to have to work with regolith.” ■


 