
Git: share a full repository as a file with git fast-export

1 Comment

Typically, we share repositories through a Git host, like GitHub, allowing others to clone the repository to get a copy. But sometimes that’s not an option, for example when trying to get someone started on your project when corporate onboarding processes take days to grant GitHub access.

In such cases, you can share a repository as a single file, including its entire history across all branches, using git fast-export. The receiver can unpack this file into a local repository using git fast-import. Let’s see how.

Export a repository

Use this command to export the current repository to a single gzipped file:

$ git fast-export --all | gzip > super-duper-project.gz

Replace super-duper-project.gz with the desired filename.

--all makes git fast-export include all branches and tags. Compressing with gzip can reduce the file size significantly because the output is text-based and contains a lot of repetition.
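If you're curious how much gzip buys you, or want to share only part of the history, here's a quick sketch (run inside a repository; the branch and tag names are illustrative):

```shell
# The fast-export stream is plain text, so gzip compresses it well;
# compare the raw and compressed sizes:
git fast-export --all | wc -c
git fast-export --all | gzip | wc -c

# fast-export also accepts rev-list arguments, so you can export only
# selected refs instead of everything:
git fast-export main v1.2.0 | gzip > partial-export.gz
```

The partial export can be imported exactly the same way as a full one.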

Import a repository file

As a receiver, you need to take a few steps to import the gzipped repository file.

First, create a new empty repository:

$ git init super-duper-project

$ cd super-duper-project

Second, import the gzipped file into the new repository:

$ gzip -dc ../super-duper-project.gz | git fast-import
fast-import statistics:
---------------------------------------------------------------------
Alloc'd objects:       5000
Total objects:         3124 (         5 duplicates                  )
      blobs  :         1394 (         0 duplicates        681 deltas of       1388 attempts)
      trees  :         1118 (         5 duplicates        980 deltas of       1090 attempts)
      commits:          611 (         0 duplicates          0 deltas of          0 attempts)
      tags   :            1 (         0 duplicates          0 deltas of          0 attempts)
Total branches:          38 (        37 loads     )
...
---------------------------------------------------------------------

Replace ../super-duper-project.gz with the path to the received file.

The output shows a bunch of statistics, some of which I snipped above. It isn’t particularly friendly, but it at least gives you an idea that the import worked and that it brought in a bunch of commits, branches, and tags.

Third, restore the working tree and staging area with:

$ git restore --staged --worktree .

This step is necessary because while git fast-import imports the repository history, it doesn’t check out files. Without this step, Git will detect all files as deleted:

$ git status
On branch main
Changes to be committed:
        deleted:    .editorconfig
        deleted:    .gitignore
        deleted:    CHANGELOG.rst
        ...

After running the git restore command, Git should report a clean status:

$ git status
On branch main
nothing to commit, working tree clean

You can now start exploring the repository as usual, such as with git log.

Configure a remote later

If you later gain access to the Git host, there’s no need to re-clone. You can add a remote to the existing repository with:

$ git remote add origin https://example.com/acme/super-duper-project.git

Replace the URL with the actual URL of the remote repository.

Test that it works with a fetch:

$ git fetch
remote: Enumerating objects: 14313, done.
remote: Counting objects: 100% (4127/4127), done.
...
From https://example.com/acme/super-duper-project
...

Then you can pull any newer changes to the main branch with:

$ git switch main

$ git pull --set-upstream origin main
From https://example.com/acme/super-duper-project
 * branch              main     -> FETCH_HEAD
Successfully rebased and updated refs/heads/main.

Fin

May true Git access be granted to you swiftly,

—Adam

jgbishop (Durham, NC), 5 minutes ago: Yet another thing I had no idea Git could do!

Brevity - 2025-07-14

Comic strip for 2025/07/14

jgbishop (Durham, NC), 8 hours ago

Study: 97% Of Average American’s Day Spent Retrieving 6-Digit Codes


CHICAGO—Shedding light on how technology increasingly shapes everyday life, a study published Thursday by the American Journal Of Sociology revealed that the average American dedicates 97% of their day to retrieving six-digit validation codes. “Our findings suggest that U.S. residents spend roughly 23 hours each day—or 160 hours every week—attempting to log in to online services, being told they need to check their phone for a six-digit code, and then entering that code into the website or app for verification,” said lead researcher Andrew Singh, adding that many Americans have to skip meals and forgo showering in order to find time to read and transfer over the hundreds of codes needed daily to access their medical records, work emails, and food delivery accounts. “There’s likely a link here between most Americans only getting 20 or 30 minutes of sleep each night and the amount of their lives now given over to frantically pressing ‘Resend’ after they fail to receive a particular code, then receiving far too many codes and being unable to figure out which one is still valid. Unfortunately, it seems this problem is only getting worse.” Singh added that his study did not even factor in the many hours Americans spend standing up from their laptop and walking over to their phone after remembering they left it in the other room.

The post Study: 97% Of Average American’s Day Spent Retrieving 6-Digit Codes appeared first on The Onion.

jgbishop (Durham, NC), 5 days ago: Ha!

That $20 dress direct from China now costs $30 after Trump closed a tariff loophole – and the US will soon end the ‘de minimis’ exemption for the rest of the world, too


Fast fashion got a lot pricier for Americans this spring – and it’ll likely get even more expensive in 2027.

That’s because the Trump administration has been rolling back a little-known feature of U.S. customs law that for years had allowed retailers to ship packages duty-free to U.S. shoppers – as long as each shipment was valued under US$800. Known as the “de minimis” exception, this rule had helped keep prices low on Chinese e-commerce platforms such as Shein and Temu, boosting their popularity with American shoppers.

But as of May 2, 2025, that advantage disappeared – at least for China and Hong Kong. That’s when the U.S. officially eliminated the exemption for low-priced imports from those places. Suddenly, cheap fashion wasn’t so cheap anymore – and demand for Shein and Temu plummeted.

But while bargain hunters might hope for workarounds by ordering from platforms based in Vietnam or elsewhere, that’s a temporary fix. The exemption is set to be eliminated for all countries in 2027, thanks to language in the tax and spending bill just signed into law.

But hold up – what’s “de minimis,” anyway?

Cheap dresses and ‘petty matters’

I’m a professor of marketing who’s long been interested in this loophole. De minimis is short for de minimis non curat lex, which means, “The law does not concern itself with petty matters.” In trade terms, the de minimis exemption refers to a value threshold below which imports can enter a country without duties. Imagine the government saying, “It’s so cheap we won’t even bother with it.”

The de minimis exemption was introduced as part of the Tariff Act of 1930 and was initially set at $200. It stayed at that level until 2016, when the U.S. bumped it up to $800. Raising the limit helped small companies as well as individual shoppers, and from 2016 to 2023 de minimis shipments skyrocketed – rising sixfold to more than 1 billion annually.

But it left large companies, which import items in bulk, at a disadvantage. That’s one reason why, historically, the same dress might cost more money in a U.S. retail store than it would if you bought it online from an e-commerce company.

A case study: Your $20 Shein dress

Imagine it’s January 2025. You’re scrolling Shein, and you spot a trendy dress priced at $20. You order the dress to be delivered to your home. The seller packs your dress and exports it to your home address. The package arrives at the U.S. border. Because the package’s “value” – specifically, the price you paid – is below the U.S. “de minimis” threshold of $800, the U.S. Customs and Border Protection exempts the importer – that is, you – from paying any import duty. You pay just $20.

Now imagine you’re trying to order the same dress in mid-July.

Executive Order 14256, issued on April 2, states that such an item, if shipped via international mail from China or Hong Kong, could be subject to an ad valorem duty of up to 20% of the item’s value, or a specific dollar amount per package, which could be $100 or more. This was increased to 30% on April 8 and 84% the following day. In the most recent modification, dated May 12, the percentage has been revised to 54%.

So, using the 54% ad valorem duty as an example, the import tariff on your $20 dress would be $10.80 – costing you $30.80 in all. Of course, the May 12 modification comes with the usual disclaimer: It will stay in effect “unless and until otherwise modified by a subsequent executive action.”
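The duty arithmetic is simple enough to check yourself. A quick sketch using the figures from the example above:

```shell
# Ad valorem duty on the example $20 dress at the May 12 rate of 54%.
price=20
rate=0.54
awk -v p="$price" -v r="$rate" \
  'BEGIN { duty = p * r; printf "duty: $%.2f, total: $%.2f\n", duty, p + duty }'
# prints: duty: $10.80, total: $30.80
```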

For millions of American shoppers, this is a wake-up call: Formerly tax-free fast fashion is now significantly more expensive. Thrifty shoppers might be tempted to buy from sellers in India or Mexico, where the de minimis exemption is still in effect — at least for now. The One Big Beautiful Bill Act ends the de minimis exemption globally starting July 1, 2027.

Trade policy has been unpredictable under President Donald Trump, and the de minimis rule has been no exception. But with the global end of the exemption now written into law, its future seems a little more certain. Although it’s always wise to watch for new developments from the White House, I suspect the U.S. de minimis exemption may soon be a thing of the past.

The Conversation

Vivek Astvansh does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

jgbishop (Durham, NC), 5 days ago: Hard to know whether this is "good" or "bad." An interesting tweak to the law, nonetheless.

Using AI Right Now: A Quick Guide


Every few months I put together a guide on which AI system to use. Since I last wrote my guide, however, there has been a subtle but important shift in how the major AI products work. Increasingly, it isn't about the best model, it is about the best overall system for most people. The good news is that picking an AI is easier than ever and you have three excellent choices. The challenge is that these systems are getting really complex to understand. I am going to try and help a bit with both.

First, the easy stuff.

Which AI to Use

For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT. With all of the options, you get access to both advanced and fast models, a voice mode, the ability to see images and documents, the ability to execute code, good mobile apps, the ability to create images and video (though Claude lacks these), and the ability to do Deep Research. Some of these features are free, but you are generally going to need to pay $20/month to get access to the full set of features you need. I will try to give you some reasons to pick one model or another as we go along, but you can’t go wrong with any of them.

What about everyone else? I am not going to cover specialized AI tools (some people love Perplexity for search, Manus is a great agent, etc.) but there are a few other options for general purpose AI systems: Grok by Elon Musk’s xAI is good if you are a big X user, though the company has not been very transparent about how its AI operates. Microsoft’s Copilot offers many of the features of ChatGPT and is accessible to users through Windows, but it can be hard to control which models you are using and when. DeepSeek r1, a Chinese model, is very capable and free to use, but it is missing a few features from the other companies and it is not clear that they will keep up in the long term. So, for most people, just stick with Gemini, Claude, or ChatGPT.

Great! This was the shortest recommendation post yet! Except… picking a system is just the beginning. The real challenge is understanding how to use these increasingly complex tools effectively.

Now what?

I spend a lot of time with people trying to use AI to get stuff done, and that has taught me how incredibly confusing this is. So I wanted to walk everyone through the most important features and choices, as well as some advice on how to actually use AI.

Picking a Model

ChatGPT, Claude, and Gemini each offer multiple AI models through their interface, and picking the right one is crucial. Think of it like choosing between a sports car and a pickup truck; both are vehicles, but you'd use them for very different tasks. Each system offers three tiers: a fast model for casual chat (Claude Sonnet, GPT-4o, Gemini Flash), a powerful model for serious work (Claude Opus, o3, Gemini Pro), and sometimes an ultra-powerful model for the hardest problems (o3-pro, which can take 20+ minutes to think). The casual models are fine for brainstorming or quick questions, but for anything high-stakes (analysis, writing, research, coding), switch to the powerful model.

Most systems default to the fast model to save computing power, so you need to manually switch using the model selector dropdown. (The free versions of these systems do not give you access to the most powerful model, so if you do not see the options I describe, it is because you are using the free version.)

I use o3, Claude 4 Opus, and Gemini 2.5 Pro for any serious work that I do. I also have particular favorites based on individual tasks that are outside of these models (GPT-4.5 is a really interesting model for writing, for example), but for most people, stick with the models I suggested most of the time.

For people concerned about privacy, Claude does not train future AI models on your data, but Gemini and ChatGPT might, if you are not using a corporate or educational version of the system. If you want to make sure your data is never used to train an AI model, you can turn off training features easily for ChatGPT without losing any functionality, and at the cost of some functionality for Gemini. You may also want to turn on or off “memory” in ChatGPT’s personalization option, which lets the AI remember scattered details about you. I find the memory system to be too erratic at this point, but you may have a different experience.

Using Deep Research

Deep Research is a key AI feature for most people, even if they don’t know it yet. Deep Research tools are very useful because they can produce very high-quality reports that often impress information professionals (lawyers, accountants, consultants, market researchers) that I speak to. You should be trying out Deep Research reports in your area of expertise to see what they can do for you, but some other use cases include:

  • Gift Guides: “what do I buy for a picky 11-year-old who has read all of Harry Potter, is interested in science museums, and loves chess? Give me options, including where to buy at the best prices.”

  • Travel Guides: “I am going to Wisconsin on vacation and want to visit unique sites, especially focusing on cheese; produce a guide for me.”

  • Second opinions in law, medicine, and other fields (it should go without saying that you should trust your doctor/lawyer above AI, but research keeps finding that the more advanced AI systems do very well in diagnosis with a surprisingly low hallucination rate, so they can be useful for second opinions).

Activating Deep Research

Deep Research reports are not error-free but are far more accurate than just asking the AI for something, and the citations tend to actually be correct. Also note that each of the Deep Research tools works a little differently, with different strengths and weaknesses. Turning on the web search option in Claude and o3 will get them to work as mini Deep Research tools, doing some web research, but not as elaborately as a full report. Google has some fun additional options once you have created a report, letting you turn it into an infographic, a quiz, or a podcast.

An Easy Approach to AI: Voice Mode

An easy way to use AI is just to start with voice mode. The two best implementations of voice mode are in the Gemini app and ChatGPT’s app and website. Claude’s voice mode is weaker than the other two systems. What makes voice mode great is that you can just have a natural conversation with the app while in the car or on a walk and get quite far in understanding what these models can do. Note the models are optimized for chat (including all of the small pauses and intakes of breath designed to make it feel like you are talking to a person), so you don’t get access to the more powerful models this way. They also don’t search the web as often which makes them more likely to hallucinate if you are asking factual questions: if you are using ChatGPT, unless you hear the clicking sound at 44 seconds into this clip, it isn’t actually searching the web.

Voice mode's killer feature isn't the natural conversation, though, it's the ability to share your screen or camera. Point your phone at a broken appliance, a math problem, a recipe you're following, or a sign in a foreign language. The AI sees what you see and responds in real-time. I've used it to identify plants on hikes, solve a problem on my screen, and get cooking tips while my hands were covered in flour. This multimodal capability is genuinely futuristic, yet most people just use voice mode like Siri. You're missing the best part.

Making Things for You: Images, Video, Code, and Documents

ChatGPT and Gemini will make images for you if you ask (Claude cannot). ChatGPT offers the most controllable image creation tool; Gemini uses two different image generation tools: Imagen, a very good traditional image generation system, and a multimodal image generation system. Generally, ChatGPT is stronger. On video creation, however, Gemini’s Veo 3 is very impressive, and you get several free uses a day (but you need to hit the Video button in the interface).

“make me a photo of an otter holding a sign saying otters are cool but also accomplished pilots. the otter should also be holding a tiny silver 747 with gold detailing.”

All three systems can produce a wide variety of other outputs, ranging from documents to statistical analyses to interactive tools to simulations to simple games. To get Gemini or ChatGPT to do this reliably, you need to select the Canvas option when you want these systems to run code or produce separate outputs. Claude is good at creating these sorts of outputs on its own. Just ask, you may be surprised what the AI systems can make.

Working with an AI

Now that you have picked a model, you can start chatting with it. It used to be that the details of your prompts mattered a lot, but the most recent AI models I suggested can often figure out what you want without the need for complex prompts. As a result, many of the tips and tricks you see online for prompting are no longer as important for most people. At the Generative AI Lab at Wharton, we have been trying to examine prompting techniques in a scientific manner, and our research has shown, for example, that being polite to AI doesn’t seem to make a big difference in output quality overall1. So just approach the AI conversationally rather than getting too worried about saying exactly the right thing.

That doesn’t mean that there is no art to prompting. If you are building a prompt for other people to use, it can take real skill to build something that works repeatedly. But for most people you can get started by keeping just a few things in mind:

  • Give the AI context to work with. Most AI models only know basic user information and the information in the current chat; they do not remember or learn about you beyond that. So you need to provide the AI with context: documents, images, PowerPoints, or even just an introductory paragraph about yourself can help - use the file option to upload files and images whenever you need. The AIs can gather some of this context themselves: ChatGPT and Claude can access your files and mailbox if you let them, and Gemini can access your Gmail, so you can ask them to look up relevant context automatically as well, though I prefer to give the context manually.

  • Be really clear about what you want. Don’t say “Write me a marketing email”; instead go with “I'm launching a B2B SaaS product for small law firms. Write a cold outreach email that addresses their specific pain points around document management. Here are the details of the product: [paste]” Or ask the AI to ask you questions to help you clarify what you want.

  • Give it step-by-step directions. Our research found this approach, called Chain-of-Thought prompting, no longer improves answer quality as much as it used to. But even if it doesn’t help that much, it can make it easier to figure out why the AI came up with a particular answer.

  • Ask for a lot of things. The AI doesn’t get tired or resentful. Ask for 50 ideas instead of 10, or thirty options to improve a sentence. Then push the AI to expand on the things you like.

  • Use branching to explore alternatives. Claude, ChatGPT, and Gemini all let you edit prompts after you have gotten an answer. This creates a new “branch” of the conversation. You can move between branches by using the arrows that appear after you have edited an answer. It is a good way to learn how your prompts impact the conversation.

Troubleshooting

I also have seen some fairly common areas where people get into trouble:

  • Hallucinations: In some ways, hallucinations are far less of a concern than they used to be, as AI has improved and newer AI models are better at not hallucinating. However, no matter how good the AI is, it will still make errors and mistakes and still give you confident answers where it is wrong. They also can hallucinate about their own capabilities and actions. Answers are more likely to be right when they come from the bigger, slower models, and if the AI did web searches. The risk of hallucination is why I always recommend using AI for topics you understand until you have a sense for their capabilities and issues.

  • Not Magic: You should remember that the best AIs can perform at the level of a very smart person on some tasks, but current models cannot provide miraculous insights beyond human understanding. If the AI seems like it did something truly impossible, it is probably not actually doing that thing but pretending it did. Similarly, AI can seem incredibly insightful when asked about personal issues, but you should always take these insights with a grain of salt.

  • Two Way Conversation: You want to engage the AI in a back-and-forth interaction. Don’t just ask for a response, push the AI and question it.

  • Checking for Errors: The AI doesn’t know “why” it did something, so asking it to explain its logic will not get you anywhere. However, if you find issues, the thinking trace of AI models can be helpful. If you click “show thinking” you can find out what the model was doing before giving you an answer. This is not always 100% accurate (you are actually getting a summary of the thinking) but is a good place to start.

Your Next Hour

So now you know where to start. First, pick a system and resign yourself to paying the $20 (the free versions are demos, not tools). Then immediately test three things on real work: First, switch to the powerful model and give it a complex challenge from your actual job with full context and have an interactive back and forth discussion. Ask it for a specific output like a document or program or diagram and ask for changes until you get a result you are happy with. Second, try Deep Research on a question where you need comprehensive information, maybe competitive analysis, gift ideas for someone specific, or a technical deep dive. Third, experiment with voice mode while doing something else — cooking, walking, commuting — and see how it changes your ability to think through problems.

Most people use AI like Google at first: quick questions, no context, default settings. You now know better. Give it documents to analyze, ask for exhaustive options, use branching to explore alternatives, experiment with different outcomes. The difference between casual users and power users isn't prompting skill (that comes with experience); it's knowing these features exist and using them on real work.


1

It is actually weirder than that: on hard math and science questions that we tested, being polite sometimes makes the AI perform much better, sometimes worse, in ways that are impossible to know in advance. So be polite if you want to!

jgbishop (Durham, NC), 22 days ago: Good tips!

This ShowerClear Design Fixes the Mold Problem All Showerheads Have


There is an inherent problem with the design of shower heads. Not some of them, all of them. The problem is that their very design creates the ideal circumstances for mold to thrive within them, internally, in areas that you cannot access for cleaning.

A bathtub faucet or kitchen sink tap is simply a shaped pipe that allows water to flow through it. When you turn the water off, the pipe mouth quickly dries, thanks to its relatively wide shape and local airflow.

Showerheads, however, are complex workings of intricate inner channels and nozzles, designed to break the water flow into spray patterns that end users find desirable.

These channels are all inside the showerhead and get little airflow.

The channels can never really dry out completely, and over time that internal dampness allows bacteria and mold (including the dreaded black mold) to thrive. In this shot of a showerhead that has been cut open with a saw, a lot of what you see is the detritus of the cut plastic, but you can also see the brown stuff.

And deeper inside the head, you find this:

The mother of Steve Sunshine, an inventor, was suffering from respiratory issues. Sunshine disassembled her showerhead and found it was filled with mold. He subsequently designed this ShowerClear:

This ingenious design pops open, so that after a shower you can let the shower head's innards dry out. It also makes it easy to clean, so you can eliminate mineral build-up. (This eliminates the mild hassle that many of us undertake to clean our showerheads, which is soaking them in a vessel filled with vinegar for a few hours.)

The ShowerClear heads come in a variety of finishes and run $140.




jgbishop (Durham, NC), 31 days ago: This is genius, but it needs to come on a wand.