Just because I don't care doesn't mean I don't understand.
623 stories · 3 followers

Prince Valiant comic strip for March 23, 2025

1 Comment

jgbishop
1 day ago
Whaaaaaaaaat?!?!?
Durham, NC

I'd like to take a moment to speak to you about the Adobe PSD format (2009)

1 Comment
xee/XeePhotoshopLoader.m at 4fa3a6d609dd72b8493e52a68f316f7a02903276 · gco/xee · GitHub

jgbishop
2 days ago
Haha; great comment in this code. I've always suspected that Adobe's software was poorly written. Both Photoshop and Lightroom are abysmally slow, and this seems to prove it.
Durham, NC

The Cybernetic Teammate

1 Comment

Over the past couple of years, we have learned that AI can boost the productivity of individual knowledge workers ranging from consultants to lawyers to coders. But most knowledge work isn’t purely an individual activity; it happens in groups and teams. And teams aren't just collections of individuals – they provide critical benefits that individuals alone typically can't, including better performance, sharing of expertise, and social connections.

So, what happens when AI acts as a teammate? This past summer we conducted a pre-registered, randomized controlled trial of 776 professionals at Procter & Gamble, the consumer goods giant, to find out.

We are ready to share the results in a new working paper: The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise. Given the scale of this project, it shouldn’t be a surprise that this paper was a massive team effort coordinated by the Digital Data Design Institute at Harvard and led by Fabrizio Dell’Acqua, Charles Ayoubi, and Karim Lakhani, along with Hila Lifshitz, Raffaella Sadun, Lilach Mollick, me, and our partners at Procter & Gamble: Yi Han, Jeff Goldman, Hari Nair, and Stewart Taub.

We wanted this experiment to be a test of real-world AI use, so we were able to replicate the product development process at P&G, thanks to the cooperation and help of the company (which had no control over the results or data). To do that, we ran one-day workshops where professionals from Europe and the US had to actually develop product ideas, packaging, retail strategies and other tasks for the business units they really worked for, which included baby products, feminine care, grooming, and oral care. Teams with the best ideas had them submitted to management for approval, so there were some real stakes involved.

We also had two kinds of professionals in our experiment: commercial experts and technical R&D experts. They were generally very experienced, with over 10 years of work at P&G alone. We randomly created teams consisting of one person in each specialty. Half were given GPT-4 or GPT-4o to use, and half were not. We also picked a random set of both types of specialists to work alone, and gave half of them access to AI. Everyone assigned to the AI condition was given a training session and a set of prompts they could use or modify. This design allowed us to isolate the effects of AI and teamwork independently and in combination. We measured outcomes across multiple dimensions including solution quality (as determined by at least two expert judges per solution), time spent, and participants' emotional responses. What we found was interesting.
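
For readers who like to see a design in code, here is a minimal, purely hypothetical sketch of the 2x2 structure (work arrangement x AI access). The names and logic are mine; the study's actual randomization also paired one Commercial with one R&D expert per team, so treat this as illustration only:

```python
import random

# Hypothetical sketch of the 2x2 design; illustrative only,
# not the study's actual randomization script.
professionals = [f"pro_{i}" for i in range(776)]
random.shuffle(professionals)

conditions = ["individual", "individual + AI", "team", "team + AI"]
assignment = {c: [] for c in conditions}
for i, person in enumerate(professionals):
    assignment[conditions[i % len(conditions)]].append(person)

for condition, members in assignment.items():
    print(f"{condition}: {len(members)} participants")
```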

AI boosts performance

When working without AI, teams outperformed individuals by a significant amount, 0.24 standard deviations (providing a sigh of relief for every teacher and manager who has pushed the value of teamwork). But the surprise came when we looked at AI-enabled participants. Individuals working with AI performed just as well as teams without AI, showing a 0.37 standard deviation improvement over the baseline. This suggests that AI effectively replicated the performance benefits of having a human teammate – one person with AI could match what previously required two-person collaboration.

Teams with AI performed best overall with a 0.39 standard deviation improvement, though the difference between individuals with AI and teams with AI wasn't statistically significant. But we found an interesting pattern when looking at truly exceptional solutions, those ranking in the top 10% of quality. Teams using AI were significantly more likely to produce these top-tier solutions, suggesting that there is value in having human teams working on a problem that goes beyond the value of working with AI alone.

Both AI-enabled groups also worked much faster, saving 12-16% of the time spent by non-AI groups while producing solutions that were substantially longer and more detailed than those from non-AI groups.
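
To make those units concrete: a "0.37 standard deviation improvement" is the difference in mean solution quality divided by the spread of the baseline scores. Here is a minimal sketch with made-up numbers (the paper's actual estimates come from the preregistered analysis, not this simple formula):

```python
import statistics

def effect_size(treatment, control):
    """Mean improvement expressed in control-group standard deviations,
    the same units as the effects reported above."""
    return (statistics.mean(treatment) - statistics.mean(control)) / statistics.stdev(control)

# Illustrative quality scores only -- NOT the study's data.
individuals_alone = [5.0, 5.4, 4.7, 5.2, 5.1, 4.8]  # baseline condition
teams_without_ai  = [5.3, 5.7, 5.1, 5.5, 5.4, 5.2]
individuals_ai    = [5.5, 5.8, 5.2, 5.6, 5.5, 5.4]

print(f"Teams vs. baseline:          {effect_size(teams_without_ai, individuals_alone):+.2f} SD")
print(f"Individuals+AI vs. baseline: {effect_size(individuals_ai, individuals_alone):+.2f} SD")
```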

Expertise boundaries vanish

Without AI, we saw clear professional silos in how people approached problems. R&D specialists consistently proposed technically oriented solutions, while Commercial specialists suggested market-focused ideas. When these specialists worked together in teams without AI, they produced more balanced solutions through their cross-functional collaboration (teamwork wins again!).

But this was another place AI made a big difference. When paired with AI, both R&D and Commercial professionals, whether in teams or working alone, produced balanced solutions that integrated both technical and commercial perspectives. The distinction between specialists virtually disappeared in AI-aided conditions, as the graph in the paper shows. We saw a similar effect on teams.

This effect was especially pronounced for employees less familiar with product development. Without AI, these less experienced employees performed relatively poorly even in teams. But with AI assistance, they suddenly performed at levels comparable to teams that included experienced members. AI effectively helped people bridge functional knowledge gaps, allowing them to think and create beyond their specialized training, and helped amateurs act more like experts.

Working with AI led to better emotional experiences

A particularly surprising finding was how AI affected the emotional experience of work. Technological change, and especially AI, has often been associated with reduced workplace satisfaction and increased stress. But our results showed the opposite, at least in this case.

(Figure: Positive emotions increase and negative emotions decrease after working with AI, compared to teams and individuals who did not have AI access.)

People using AI reported significantly higher levels of positive emotions (excitement, energy, and enthusiasm) compared to those working without AI. They also reported lower levels of negative emotions like anxiety and frustration. Individuals working with AI had emotional experiences comparable to or better than those working in human teams.

While we conducted a thorough study involving a pre-registered randomized controlled trial, there are always caveats to these sorts of studies. For example, it is possible that larger teams would show very different results when working with AI, or that working with AI on longer projects may change its value. It is also possible that our results represent a lower bound: all of these experiments were conducted with GPT-4 or GPT-4o, less capable models than those available today; the participants did not have much prompting experience, so they may not have gotten the full benefit; and chatbots are not really built for teamwork. There is a lot more detail on all of this in the paper, but limitations aside, the bigger question might be: why does this all matter?

Why this matters

Organizations have primarily viewed AI as just another productivity tool, like a better calculator or spreadsheet. This made sense initially but has become increasingly limiting as models get better and as recent data finds users most often employ AI for critical thinking and complex problem solving, not just routine productivity tasks. Companies that focus solely on efficiency gains from AI will not only find workers unwilling to share their AI discoveries for fear of making themselves redundant but will also miss the opportunity to think bigger about the future of work.

To successfully use AI, organizations will need to change their analogies. Our findings suggest AI sometimes functions more like a teammate than a tool. While not human, it replicates core benefits of teamwork—improved performance, expertise sharing, and positive emotional experiences. This teammate perspective should make organizations think differently about AI. It suggests a need to reconsider team structures, training programs, and even traditional boundaries between specialties. At least with the current set of AI tools, AI augments human capabilities. It democratizes expertise as well, enabling more employees to contribute meaningfully to specialized tasks and potentially opening new career pathways.

The most exciting implication may be that AI doesn't just automate existing tasks, it changes how we can think about work itself. The future of work isn't just about individuals adapting to AI, it's about organizations reimagining the fundamental nature of teamwork and management structures themselves. And that's a challenge that will require not just technological solutions, but new organizational thinking.

jgbishop
2 days ago
Very interesting!
Durham, NC

Nightdive Studios announces launch date for System Shock 2 remake

1 Comment

System Shock 2: 25th Anniversary Edition (formerly titled System Shock 2: Enhanced Edition) will be arriving on PC, Xbox, PlayStation, and Nintendo Switch on June 26, courtesy of Nightdive Studios. Announced at the Future Game Show Spring Showcase, the 25th Anniversary Edition brings a fresh coat of paint to the classic 1999 sci-fi horror game and will be the first classic System Shock title playable on consoles.

This effort isn’t nearly as extensive as the excellent System Shock remake from 2023, but System Shock 2: 25th Anniversary Edition still features a massive graphics overhaul with updated textures and ultrawide support, in addition to new audio recordings and a litany of quality-of-life improvements to make the game more palatable for modern audiences. Nightdive took a similar approach with its Enhanced Edition of the original System Shock, turning the previously obtuse title into what many consider the ideal way to play the 1994 game.

Over the past 25 years, System Shock 2 has received a remarkable amount of community support in the form of patches, higher-resolution textures, and other fixes. These mods offer an excellent version of System Shock 2, but the process of getting everything running properly makes the experience relatively hostile to new players. The 25th Anniversary Edition integrates many of these improvements while optimizing aspects of System Shock 2 that haven’t been touched since 1999. System Shock 2 is one of my favorite games of all time, and I’m excited for a new generation of players to experience the maiden voyage of the Von Braun when the game comes out in June.

jgbishop
3 days ago
Excited for this!!!
Durham, NC

Japanese Overdesign: Bookends that Don't Let the Books Fall Over When One is Removed

1 Comment

Books are like people, in that they need to lean on each other for support. When you remove a book from a shelf, or from between two bookends, the neighboring books close the gap by leaning.

While aesthetically displeasing, leaning books aren't a huge problem for most of us. But unsurprisingly, a designer in Japan—a country obsessed with UX—has devised a way around this. This Firm Book End, from stationery brand Lihit Lab, allows books and other book-shaped media to stand on their own.

The flip-down stoppers are gravity-activated. Units can be linked side to side, allowing users to select the overall length. Rubber feet on the bottom prevent the unit from sliding.

The one shown above is A5 size, and there's a larger A4 version too, shown below.

The A5 runs ¥1,300 (USD $9), and the A4 is ¥2,300 (USD $15).



jgbishop
6 days ago
Cool idea!
Durham, NC

Horseless intelligence

1 Comment

AI is everywhere these days, and everyone has opinions and thoughts. These are some of mine.

Full disclosure: for a time I worked for Anthropic, the makers of Claude.ai. I no longer do, and nothing in this post (or elsewhere on this site) is their opinion or is proprietary to them.

How to use AI

My advice about using AI is simple: use AI as an assistant, not an expert, and use it judiciously. Some people will object, “but AI can be wrong!” Yes, and so can the internet in general, but no one now recommends avoiding online resources because they can be wrong. They recommend taking it all with a grain of salt and being careful. That’s what you should do with AI help as well.

We are all learning how to use AI well. Prompt engineering is a new discipline. It surprises me that large language models (LLMs) give better answers if you include phrases like “think step-by-step” or “check your answer before you reply” in your prompt, but they do improve the result. LLMs are not search engines, but like search engines, you have to approach them as unique tools that will do better if you know how to ask the right questions.
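
As a concrete illustration (my own hypothetical helper, not any particular library's API), the tweak can be as simple as appending those phrases to whatever you were going to ask:

```python
def build_prompt(question: str, careful: bool = True) -> str:
    """Append the kinds of nudges that often improve LLM answers."""
    prompt = question
    if careful:
        prompt += ("\n\nThink step-by-step and show your reasoning."
                   "\nCheck your answer before you reply.")
    return prompt

print(build_prompt("How many Fridays were there in March 2025?"))
```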

If you approach AI thinking that it will hallucinate and be wrong, and then discard it as soon as it does, you are falling victim to confirmation bias. Yes, AI will be wrong sometimes. That doesn’t mean it is useless. It means you have to use it carefully.

I’ve used AI to help me write code when I didn’t know how to get started because it needed more research than I could afford at the moment. The AI didn’t produce finished code, but it got me going in the right direction, and iterating with it got me to working code.

One thing it seemed to do well was to write more tests given a few examples to start from. Your workflow probably has steps where AI can help you. It's not a magic bullet; it's a tool that you have to learn how to use.
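
For example, seeding the model with a couple of existing tests and asking for more might look like this (a hypothetical slugify() example of mine, not any specific tool's workflow):

```python
# Two real tests serve as style and behavior examples for the model.
EXISTING_TESTS = '''\
def test_slugify_basic():
    assert slugify("Hello World") == "hello-world"

def test_slugify_punctuation():
    assert slugify("Hello, World!") == "hello-world"
'''

prompt = (
    "Here are two pytest tests for a slugify() function:\n\n"
    + EXISTING_TESTS
    + "\nWrite five more tests in the same style, covering empty strings, "
      "unicode, and repeated separators."
)
# Send `prompt` to whichever model you use, then review the generated
# tests as carefully as you would a teammate's code.
```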

The future of coding

In beginner-coding spaces like Python Discord, anxious learners ask if there is any point in learning to code, since won’t AI take all the jobs soon anyway?

Simon Willison seems to be our best guide to the head-spinning pace of AI development these days (if you can keep up with the head-spinning pace of his blog!). I like what he said recently about how AI will affect new programmers:

There has never been a better time to learn to code — the learning curve is being shaved down by these new LLM-based tools, and the amount of value people with programming literacy can produce is going up by an order of magnitude.

People who know both coding and LLMs will be a whole lot more attractive to hire to build software than people who just know LLMs for many years to come.

Simon has also emphasized in his writing what I have found: AI lets me write code that I wouldn’t have undertaken without its help. It doesn’t produce the finished code, but it’s a helpful pair-programming assistant.

Can LLMs think?

Another objection I see often: “but LLMs can’t think, they just predict the next word!” I’m not sure we have a consensus understanding of what “think” means in this context. Airplanes don’t fly in the same way that birds do. Automobiles don’t run in the same way that horses do. The important thing is that they accomplish many of the same tasks.

OK, so AI doesn’t think the same way that people do. I’m fine with that. What’s important to me is that it can do some work for me, work that could also be done by people thinking. Cars (“horseless carriages”) do work that used to be done by horses running. No one now complains that cars work differently than horses.

If “just predict the next word” is an accurate description of what LLMs are doing, it’s a demonstration of how surprisingly powerful predicting the next word can be.
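
For intuition about what "predict the next word" even means, here is a deliberately tiny toy of mine; real LLMs condition on vastly more context using learned weights rather than a lookup table:

```python
import random
from collections import defaultdict

# A toy bigram "model": nothing like a real LLM, but it shows the
# shape of "predict the next word from the words that came before".
corpus = "the cat sat on the mat and the dog sat on the rug".split()

next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        options = next_words.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sample the next word
    return " ".join(words)

print(generate("the"))
```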

Harms

I am concerned about the harms that AI can cause. Some people and organizations are focused on Asimov-style harms (will society collapse, will millions die?) and I am glad they are. But I’m more concerned with Dickens-style harms: people losing jobs not because AI can do their work, but because people in charge will think AI can do other people’s work. Harms due to people misunderstanding what AI does and doesn’t do well and misusing it.

I don’t see easy solutions to these problems. To go back to the car analogy: we’ve been a car society for about 120 years. For most of that time we’ve been leaning more and more towards cars. We are still trying to find the right balance, the right way to reduce the harm they cause while keeping the benefits they give us.

AI will be similar. The technology is not going to go away. We will not turn our back on it and put it back into the bottle. We’ll continue to work on improving how it works and how we work with it. There will be good and bad. The balance will depend on how well we collectively use it and educate each other, and how well we pay attention to what is happening.

Future

The pro-AI hype in the industry is at a fever pitch right now; it's completely overblown. But the anti-AI crowd also seems to be railing against it without a clear understanding of the current capabilities or the useful approaches.

I’m going to be using AI more, and learning where it works well and where it doesn’t.

jgbishop
6 days ago
Good read.
Durham, NC