Category: Tech & Data

AI, marketing, projects, app recommendations

  • My Brother Is Building a Data Platform for Fishermen, and It’s Exposing How Broken the Industry Really Is

    My brother is working on a test data-scraping project for New England fisheries. I’m watching from the sidelines, and what I’ve discovered has completely changed how I see the commercial fishing industry. The verdict: The small-boat fishing industry is fundamentally broken, crushed by data asymmetry that’s forcing family-owned vessels out of business while industrial fleets dominate. But the research I’ve uncovered is also thrilling because it reveals domain-specific knowledge you simply can’t find anywhere else without doing the hard work of talking to fishermen, reading regulatory testimony, and digging through auction data.

    What started as curiosity about my brother’s project turned into a deep dive that exposed something darker: small fishing boats in New England are dying not because of bad seamanship or declining fish stocks, but because they don’t have access to the right information at the right time. And that information gap is widening every year.

    The Data Wall That’s Sinking Small Boats

    The commercial fishing landscape in New England has transformed from a “hunting” industry into a data-managed resource sector. For family-owned vessels under fifty feet, the traditional measures of success no longer matter. Physical prowess, mechanical reliability, decades of experience: none of it counts if you don’t have high-velocity, localized data.

    The problem is straightforward and brutal. Industrial-scale fleets have extensive administrative support to navigate complex catch-share systems and quota markets. Small outfits rely on anecdotal reports or delayed scientific assessments that don’t reflect real-time conditions on the water. This isn’t just a competitive disadvantage. It’s a systemic barrier that’s accelerating fleet consolidation into major ports like New Bedford and Gloucester, while traditional landing sites in eastern Maine and Rhode Island wither.

    The core issue: Data has become the primary currency of maritime operations, and small boats can’t afford to buy in.

    One Hundred Pain Points That Define the Crisis

    The research I found breaks down into one hundred distinct operational, regulatory, and economic friction points. These aren’t abstract policy concerns. They’re daily problems that determine whether a fishing trip makes money or loses it.

    Environmental Blindness Is Costing Boats Money

    Small-boat operators need granular, site-specific data to anticipate environmental shifts before they manifest as regulatory closures or gear failures. The Gulf of Maine is changing at a rate that exceeds traditional triennial surveys. Fishermen are seeing it happen in real time, but the data systems lag years behind.

    Here’s what they actually need:

    • Weekly Paralytic Shellfish Poisoning forecasts for specific harvest sites, not regional averages that arrive too late to prevent closures
    • Real-time subsurface temperature data at specific gear depths to track the movement of black sea bass and Atlantic cod, which are shifting north faster than quota systems can adapt
    • Harmful Algal Bloom trajectory modeling that gives advance warning of rapid harvesting closures instead of emergency alerts after the damage is done
    • River herring avoidance map updates to prevent midwater trawl bycatch that triggers slippage penalties and early season closures
    • Whale migration and acoustic detection alerts for compliance with seasonal speed and gear restrictions that can shut down entire fishing areas

    The biological baseline is shifting. Black sea bass are expanding north into Southern New England waters where vessels encounter high volumes of fish but lack sufficient quota, leading to massive discard problems. If real-time range data were integrated into management models, boats could adjust. Instead, they’re fishing blind and getting penalized for it.

    Market Opacity Is Stealing Revenue

    Small-boat operators are price-takers in a market where information is siloed within vertically integrated firms or opaque auction houses. The “derived demand” from retail markets in Boston and New York doesn’t communicate efficiently to the port level, causing significant revenue loss.

    The price spread between size grades is often more dramatic than species-level fluctuation. U-10 scallops command a premium of over fifteen dollars per pound compared to 30-40 count scallops. A small boat focusing on quality over quantity and landing high-count, hand-handled product can theoretically out-earn a larger vessel. But only if they know in real time which auction is currently paying that premium.

    Current market failures:

    • No live price feeds from major display auctions in New Bedford and Gloucester, forcing captains to guess where to land their catch
    • Opaque quota lease pricing within sectors, with trades facilitated by sector managers without public price reporting
    • No visibility into daily landing volumes at local ports, leading to market gluts that depress prices for everyone who shows up that day
    • Vertical integration that eliminates independent buyer competition at certain ports, giving captains no negotiating leverage
    • Global aquaculture price pressure from salmon and shrimp that undercuts wild-caught pricing, but with no advance warning of market shifts

    Quota markets are particularly broken. Trades happen without public price reporting, creating “informational disconnects” between the managerial class and rank-and-file fishers. Small outfits lack the scale to absorb these inefficiencies.

    Amendment 23 Is an Existential Threat

    The introduction of Amendment 23 and the push toward 100 percent monitoring coverage represents the most significant regulatory hurdle for small-boat owners in decades. The shift toward industry-funded monitoring places a direct financial tax on every trip that often exceeds the profit margin.

    The math is brutal: Human at-sea monitors cost between seven hundred and twelve hundred dollars per vessel per day. For a small vessel landing three thousand dollars’ worth of fish in a trip, a twelve-hundred-dollar monitor fee represents a forty percent tax on gross revenue. Electronic monitoring hardware costs between thirty-five hundred and five thousand dollars upfront, plus ongoing maintenance and data review fees that can reach forty-eight hundred dollars annually.
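    That per-trip arithmetic is worth making explicit. A minimal sketch in Python (the function and its name are mine; the dollar figures come from the text above):

```python
def monitor_tax_rate(gross_revenue: float, monitor_fee: float) -> float:
    """Fraction of a trip's gross revenue consumed by the monitoring fee."""
    return monitor_fee / gross_revenue

# Figures from the text: a $3,000 trip carrying a $1,200/day human monitor.
rate = monitor_tax_rate(gross_revenue=3000, monitor_fee=1200)
print(f"{rate:.0%}")  # → 40%
```

    The same function makes clear why the fee structure hits small boats hardest: a $1,200 monitor on a $30,000 industrial trip is a 4 percent cost, not a 40 percent one.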

    The regulatory burden includes:

    • Daily availability tracking for at-sea monitors and observers, with trip cancellations when observer shortages hit
    • Redundant logbook reporting that owner-operators must complete after exhausting shifts at sea
    • Slippage reporting requirements for discards that can trigger violations during at-sea sampling
    • Real-time tracking of Acceptable Biological Catch caps to avoid sudden fishery closures mid-trip
    • Ropeless gear technology mandates that require expensive on-demand systems with uncertain reliability data

    The narrative surrounding Amendment 23 reveals an industry in crisis. The requirement for one hundred percent monitoring for groundfish trips is viewed as an existential threat. A platform providing real-time updates on monitoring waivers or current Electronic Monitoring system prices would serve as a critical financial tool for vessels struggling to stay afloat.

    The Statistical Reality of Consolidation

    The quantitative data from New England fisheries shows a sector under extreme pressure. The number of boats landing groundfish in Maine has plummeted by forty percent in recent years, while New England as a whole has seen a twenty-six percent decline in active groundfish vessels.

    Large vessels over seventy-five feet have seen revenue increase significantly, concentrating operations in Gloucester and New Bedford. Medium vessels in the fifty to seventy-five foot range are up about five percent on average, consolidating in Massachusetts hubs. Small vessels under fifty feet? Revenue is down significantly, with a forty percent reduction in active permits.

    This consolidation is driven by sector allocations. Small boats with low catch history find it more profitable to lease their quota to larger vessels than to fish it themselves. This creates a welfare program for inactive permit holders while stripping the working waterfront of active vessels.

    Auction Price Data Reveals the Opportunity Cost

    Daily price reports from New Bedford and Gloucester auctions show massive volatility across different size grades of the same species. When I looked at actual auction data, the spreads were shocking:

    • Scallops ranging from eleven dollars per pound for 30-40 count Closed Area I product to twenty-eight dollars per pound for U-10 Channel scallops
    • Market cod at two dollars and sixty cents per pound while halibut fetches six dollars and fifty-three cents
    • Large monkfish tails at two dollars per pound, with premium pricing available for smaller, higher-quality cuts

    Without real-time access to this pricing data, small boats are making million-dollar decisions based on outdated information or rumors from other captains.

    The Technology Gap Is About Last-Mile Delivery

    The technical challenge isn’t a lack of data. NERACOOS ocean observing buoys send hourly updates on over fifty variables, including wave frequency and subsurface temperature. Federal Register updates track regulatory changes daily. Auction houses process thousands of pounds of seafood with documented pricing.

    The gap is one of responsiveness. Small boats need a system that can take a hundred-million-observation dataset like NERACOOS and distill it into a moment’s-notice weather alert tailored for a forty-foot dragger. The current infrastructure is robust in data volume but completely lacks the last-mile delivery system that puts information in the hands of the harvester when it matters most.
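    At its core, that distillation step is a threshold filter over the hourly observation stream. A hypothetical sketch, where the field names and limits are illustrative defaults for a small vessel, not actual NERACOOS variables or published safety thresholds:

```python
from dataclasses import dataclass

@dataclass
class BuoyObservation:
    # Hypothetical subset of the 50+ variables a buoy reports hourly.
    wave_height_m: float
    wave_period_s: float
    wind_speed_kt: float

def go_no_go(obs: BuoyObservation, max_wave_m: float = 2.0,
             max_wind_kt: float = 25.0) -> str:
    """Distill raw buoy numbers into a one-word alert for a small vessel.

    The default thresholds are placeholders; a real system would tune
    them per vessel class and season.
    """
    if obs.wave_height_m > max_wave_m or obs.wind_speed_kt > max_wind_kt:
        return "NO-GO"
    return "GO"

print(go_no_go(BuoyObservation(2.6, 7.0, 18.0)))  # → NO-GO
```

    The point of the sketch is the shape of the last mile: the data already exists; what is missing is the few lines of logic that turn fifty variables into one decision.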

    Critical technology failures:

    • No real-time buoy data accessibility with user-friendly hourly updates for wave frequency and storm surge prediction
    • Data siloing between NOAA and state agencies like Maine Department of Marine Resources, preventing a unified view of the regional coastline
    • Poor UI and UX in existing electronic vessel trip reports, forcing tired crews to spend extra time on data entry after exhausting shifts
    • No interoperability between vessel GPS and management maps, leading to accidental incursions into closed areas and expensive violations
    • High cost of satellite communication uptime offshore, making real-time reporting and safety systems financially unfeasible

    Technologically, small boats need a system that recognizes their specific constraints. A forty-foot boat in rough seas can’t navigate complex software interfaces or wait for slow data loads. The platform must deliver instant, actionable intelligence or it’s useless.

    Safety Data Nobody Talks About

    Commercial fishing remains the most dangerous occupation in the United States, with a fatal injury rate of two hundred per one hundred thousand workers. That’s compared to 3.3 per one hundred thousand for the general workforce. Vessels are lost at an average rate of ninety per year in New England alone.

    For small-boat outfits, safety is fundamentally a data problem: knowing when to stay in port and how to handle emergencies when offshore. The qualitative evidence suggests that “never one thing sinks a boat.” It’s always a series of events, starting with poor data on weather or vessel stability.

    The social dimension is often overlooked. Chronic pain and injury impact harvester health over decades. Substance use and drug-related problems plague fishing ports. COVID-19 infection control became a labor access crisis on small vessels. Mental health stressors from regulatory pressure and economic uncertainty compound the physical dangers.

    What fishermen actually need for safety:

    • Micro-climate weather alerts for localized nor’easters that affect small vessels differently than large boats
    • Real-time wave frequency data to make go or no-go decisions before leaving the dock
    • Crew manifest and terms of employment templates for legal protection in injury cases
    • Fatigue management tools to track sleep deprivation and reduce human error

    Small outfits need a platform that integrates safety checklists, weather alerts, and labor manifests into their daily workflow, reinforcing a culture of integrated preparedness.

    The Psychology of Distrust

    There’s a profound credibility gap between federal scientific models and the small-boat fleet. Fishermen witness species range shifts years before they appear in quota allocations. They see black sea bass moving north in massive numbers while their quotas remain based on outdated catch history from decades ago.

    Qualitative testimony from public hearings reveals a pervasive belief that NOAA scientists “ignore data to fit their own models.” Captains observe what they call “carnage” on their decks, massive bycatch of species that aren’t supposed to be in those waters according to federal assessments.

    This distrust extends to Electronic Monitoring systems. While EM is technically more efficient than human observers, fishermen fear video data will be used for “gotcha” enforcement rather than scientific catch accounting. The platform my brother is researching needs to address this head-on by allowing fishermen to see their own data and understand how it’s being audited. If the system provides a pre-audit check that helps a captain avoid a slippage violation before official data is uploaded, it transitions from a surveillance tool to a protective asset.

    What the Optimal Platform Actually Needs

    Based on the hundred pain points and the statistical landscape, the platform must be built on a multi-layered data engine addressing three pillars of small-boat survival: safety, profit, and compliance.

    The scraped data architecture should cover five clusters:

    Physical oceanography: NERACOOS ERDDAP servers, NDBC buoys, NECOFS models updated hourly for life-safety decisions and species movement prediction via thermal niches.

    Market pricing: BASE New Bedford auction, CASE Gloucester auction, Whaling City portal with real-time updates during auction hours to prevent revenue loss from informational disconnect.

    Regulatory status: Federal Register Amendment 23 updates, NEFMC meeting logs, GARFO notices updated daily to avoid closures and track quota lease prices.

    Biological threats: Maine DMR PSP forecasts, green crab demographics, HAB trajectories updated weekly to manage aquaculture risk and predict future yield.

    Infrastructure and logistics: Municipal GIS portals, working waterfront inventories, fuel price boards updated weekly to navigate port-side services and defend against gentrification.

    The physical oceanography layer is particularly critical. Each buoy in the NERACOOS network sends hourly updates on over fifty variables. For a small boat, knowing that a subsurface thermal front has moved five miles east can mean the difference between a skunked trip and a full hold.
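    On the consumption side, ERDDAP tabledap endpoints return plain CSV (a header row followed by a units row), so the parsing can stay lightweight even over a slow offshore link. The sample response, dataset columns, and function below are made up for illustration and would need to match a real NERACOOS dataset:

```python
import csv
import io

# Hypothetical tabledap CSV response: header row, units row, data rows.
sample = """time,depth,temperature
UTC,m,degree_C
2024-05-01T00:00:00Z,20.0,8.4
2024-05-01T01:00:00Z,20.0,8.6
2024-05-01T02:00:00Z,20.0,8.9
"""

def latest_subsurface_temp(csv_text: str) -> float:
    """Return the most recent temperature reading from an ERDDAP-style CSV,
    skipping the units row that follows the header."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    data_rows = rows[1:]  # the first parsed row holds units, not data
    return float(data_rows[-1]["temperature"])

print(latest_subsurface_temp(sample))  # → 8.9
```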

    Three Tools That Would Change Everything

    The platform can’t be a collection of PDFs. It must be a structured, API-driven dashboard with specific tools:

    Market Delta Tool: Calculates steaming cost versus price delta. If New Bedford is paying fifty cents per pound more for cod than Gloucester, but the vessel is currently in Salem, the app calculates whether the extra fuel and time justify the price gain. This turns a gut-feeling decision into a math problem.
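    A minimal version of that calculation might look like the following. The only figure taken from the text is the fifty-cent price delta; the catch size, distance, fuel burn, speed, and diesel price are hypothetical stand-ins:

```python
def market_delta(price_delta_per_lb: float, pounds: float,
                 extra_nm: float, fuel_gph: float, speed_kt: float,
                 fuel_price: float) -> float:
    """Net gain (positive) or loss (negative) from steaming to the
    higher-paying port instead of landing at the nearer one."""
    extra_hours = extra_nm / speed_kt
    fuel_cost = extra_hours * fuel_gph * fuel_price
    return price_delta_per_lb * pounds - fuel_cost

# Hypothetical boat: 5,000 lb of cod, 30 extra nautical miles,
# burning 12 gal/hr at 8 kt, diesel at $4.50/gal.
gain = market_delta(0.50, 5000, 30, 12, 8, 4.50)
print(round(gain, 2))  # → 2297.5
```

    A fuller version would also price the extra crew hours and the quality penalty of a longer steam, but even this bare form turns the gut feeling into a sign check: steam if the number is positive.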

    Monitoring Cost Calculator: Tracks the monitor waiver status for specific sectors. When the GARFO regional administrator issues a waiver due to lack of funding, vessels need to know instantly so they can fish without a twelve-hundred-dollar-per-day monitor.

    Bycatch Avoidance Map: Scraped real-time data from the River Herring Avoidance Program showing hotspots of bycatch that could trigger early season closures for the entire fleet. Small boats can route around problem areas and protect both their season and the broader fishery.

    The Future Is About Responsiveness

    The future of New England fisheries will be defined by responsiveness to environmental change. The Blue Economy is shifting from wild-harvest dominance to a hybrid model including offshore wind, increased aquaculture, and changes in shipping traffic due to reduced high-latitude ice.

    Small outfits that fail to adopt a data-driven approach will be consolidated out of existence. The economic forces are too strong and the regulatory burden too heavy for traditional methods to survive. But the data-scraped platform provides a technical equalizer. By democratizing access to live auction prices, quota markets, and subsurface oceanography, the platform allows a forty-foot boat to operate with the same informational sophistication as a hundred-foot industrial trawler.

    The ultimate utility lies in supporting integrated preparedness. Whether preparing for a nor’easter, a sudden PSP closure, or the next regulatory shift in Amendment 23, the platform serves as a kind of digital mettle, giving the independent fisherman the means to face demanding situations with spirit and resilience.

    The Feeling

    There’s something both devastating and energizing about uncovering this information. Watching my brother dig into the specifics of New England fisheries data scraping, I expected technical challenges and maybe some interesting industry quirks. What I found instead was an entire economic class being systematically squeezed out of existence by information inequality.

    The research has a weight to it. Reading through testimony from public hearings where third-generation fishermen describe choosing between paying for a monitor or making their mortgage payment feels like witnessing something important collapse in slow motion. These aren’t abstract policy debates. They’re families losing livelihoods that stretch back centuries because they can’t afford a twelve-hundred-dollar surveillance tax on every fishing trip.

    But there’s also an undercurrent of excitement running through all of this. This is domain-specific knowledge that’s incredibly hard to access without doing the actual work. You can’t Google “why are small fishing boats failing” and get the real answer. You have to read through fishery management plan amendments, dig through auction price data from obscure portals, understand the difference between Potential Sector Contributions and Acceptable Biological Catch caps, and synthesize qualitative testimony with quantitative trends.

    The density of information is thrilling in a nerdy way. Learning that U-10 scallops command fifteen dollars per pound more than 30-40 count scallops, or that NERACOOS buoys send hourly updates on over fifty variables, or that black sea bass are moving north faster than federal quota models can track—each of these details builds into a comprehensive picture that very few people outside the industry ever see.

    There’s frustration, too. The solutions feel obvious once you understand the problem. Build a dashboard that scrapes auction data and tells captains where to land their catch. Aggregate subsurface temperature readings and predict species movement. Create a monitoring cost calculator that tracks waiver status in real time. These aren’t moonshot ideas. They’re straightforward technical implementations that could genuinely save small fishing operations.

    The emotional core is this: I’m watching an entire industry structure fail not because the work is impossible or the resource is depleted, but because information flows to the wrong people at the wrong speed. The big boats have administrative teams. The small boats have WhatsApp groups and rumors. And that gap, more than any environmental change or regulatory burden, is what’s killing the New England small-boat fleet.

    My brother’s project feels important now in a way it didn’t when he first described it. It’s not just test data scraping. It’s building the infrastructure that could determine whether family-owned fishing vessels survive the next decade. And the fact that this level of detailed, actionable knowledge exists but isn’t readily available anywhere else makes the whole thing feel urgent and valuable and a little bit secret—like we’ve stumbled into understanding something that matters more than most people realize.

  • How I Use AI Research Tools to Solve Everyday Problems (Like Finally Nailing the Perfect Chicken Temp)

    I was cooking chicken breast the other day and hit a familiar wall. What temperature should I actually pull it from the oven? I’ve been doing 152°F for years, but I realized I’d never properly validated that number. I didn’t want to just Google it and land on one random blog post. I didn’t want to just ask ChatGPT and get a single synthesized answer. I wanted something statistically significant.

    My conclusion: Google Gemini Deep Research is now my go-to tool for this kind of problem. It aggregated 50 real-world temperature recommendations from cooking forums, ran the stats, and confirmed that the median preferred temp is 155°F. I was at 152°F, so I’d pretty much nailed it. More importantly, the process gave me a repeatable framework for solving problems where I need distributed consensus, not just expert opinion.

    The Core Problem: One Source Isn’t Enough Anymore

    Here’s how I used to approach questions like this. I’d search “chicken breast internal temp,” click the first article, and either trust it or feel vaguely unsatisfied. Maybe I’d check a second source if the first one seemed sketchy. But that’s not actually research. That’s just confirmation bias with extra steps.

    The issue is that most everyday problems don’t have a single authoritative answer. Chicken temp is a perfect example. The USDA says 165°F. But if you’ve ever cooked to that number, you know the result: dry, chalky, borderline cardboard. So clearly, the federal guideline isn’t optimized for quality. It’s optimized for liability.

    What I actually wanted was to know what people who care about this are doing. Not one person. Not one chef. I wanted the wisdom of the crowd, filtered through people who’ve debugged this exact problem hundreds of times.

    Enter AI Research Tools

    This is where tools like Google Gemini Deep Research, Perplexity, and ChatGPT’s research mode shine. I don’t use them to get a single answer. I use them to do the boring, repetitive work of aggregating dozens of sources and finding patterns.

    Here’s what I asked Gemini: “Pull 50 different instances of chicken breast cooking temperature recommendations from forums around the web and create a statistical analysis of where they fall.”

    That’s it. No complicated prompt engineering. Just a clear ask for breadth and rigor.

    What Gemini Actually Did

    Gemini went out and scraped recommendations from Reddit’s cooking communities, eGullet, Chowhound, and other forums. It pulled 50 independent data points from real users who’d posted their preferred internal temps. Then it standardized the data (turning ranges like “150-155°F” into medians), calculated descriptive stats, and identified clusters.

    The output was genuinely impressive. It gave me:

    • Median temp: 155°F
    • Mode: 155°F (appeared 15 times)
    • Mean: 156.4°F
    • Range: 140°F to 168°F
    • Standard deviation: ~5.8°F

    It also broke the data into philosophical clusters. There’s the “USDA Loyalists” who stick to 165°F (18% of the sample). There’s the “Sweet Spot Consensus” at 155°F (42% of the sample). And there’s a smaller group of “Texture Technicians” who pull at 150°F and rely heavily on carryover cooking.
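    For what it’s worth, those descriptive statistics are reproducible with nothing but the standard library, which makes a handy sanity check on the AI’s arithmetic. The list below is synthetic stand-in data, not the 50 values Gemini actually collected:

```python
import statistics

# Synthetic stand-in for the scraped recommendations (°F).
temps = [150, 155, 155, 160, 165, 155, 152, 148, 155, 165, 155, 158]

print("median:", statistics.median(temps))           # → 155.0
print("mode:  ", statistics.mode(temps))             # → 155
print("mean:  ", round(statistics.mean(temps), 1))   # → 156.1
print("range: ", min(temps), "-", max(temps))        # → 148 - 165
print("stdev: ", round(statistics.stdev(temps), 1))  # → 5.2
```

    Asking the AI to list its raw data points and then re-running the numbers yourself is a cheap way to catch padded or hallucinated samples.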

    One friction point: Gemini’s output was extremely verbose. The report it generated was over 6,000 words, written in this pseudo-academic tone with headings like “The Thermal Dialectic” and references to the chicken’s scientific name (Gallus gallus domesticus). I didn’t ask for that. I just wanted the stats. But I can skim, so it wasn’t a dealbreaker.

    Why This Approach Works

    The reason this method is so effective is that it bypasses two common failure modes:

    1. Authority bias: When you ask a single expert or AI model, you get one perspective shaped by that source’s training data or personal experience. That might be fine for settled science, but for subjective, context-dependent questions (like how to cook meat), you need variance.
    2. Anecdata: When you ask friends or scroll through a single Reddit thread, you’re sampling a tiny, non-representative slice of opinion. You might land on the one guy who insists 145°F is fine, or the one who burns everything to 175°F out of paranoia.

    By forcing the AI to aggregate 50 independent sources and run the numbers, I’m effectively crowdsourcing the answer. I’m not trusting Gemini’s opinion. I’m trusting the collective trial-and-error of hundreds of home cooks who’ve debugged this problem in their own kitchens.

    The Practical Takeaway on Chicken

    The actual finding confirmed what I’d been doing intuitively. The forum consensus is clear: pull chicken breast at 155°F, rest it for 5-10 minutes, and it’ll coast up to 160-162°F via carryover heat. That’s hot enough to pasteurize the meat (you only need 50 seconds at 155°F for a 7-log reduction of Salmonella), but cool enough that you don’t squeeze all the moisture out of the muscle fibers.

    The 165°F USDA recommendation is designed for instant lethality. You hit that temp, bacteria die in under 10 seconds, and you’re safe. But you don’t need instant lethality if you’re willing to hold the meat at a lower temp for a minute or two. That’s the insight the crowd figured out, and it’s the insight the federal guidelines ignore because they’re optimized for worst-case scenarios and institutional liability.
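    The relationship between the two guidelines follows the standard log-linear thermal death time model: dropping the temperature by one z-value multiplies the required hold time by ten. Taking the article’s 50 seconds at 155°F at face value and assuming z ≈ 10°F for Salmonella (the z-value is my assumption, not a figure from the text):

```python
def hold_time_seconds(temp_f: float, ref_temp_f: float = 155.0,
                      ref_time_s: float = 50.0,
                      z_value_f: float = 10.0) -> float:
    """Hold time for the same log-reduction at another temperature,
    per the log-linear thermal death time model. z-value is assumed."""
    return ref_time_s * 10 ** ((ref_temp_f - temp_f) / z_value_f)

print(round(hold_time_seconds(165)))  # → 5    (why 165°F reads as "instant")
print(round(hold_time_seconds(150)))  # → 158  (~2.5 minutes at 150°F)
```

    Under these assumptions the numbers line up with both camps: 165°F is effectively instant, and the 150°F “Texture Technicians” are fine as long as the meat actually holds that temperature for a couple of minutes.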

    One specific friction point even in the 155°F camp: you absolutely need a reliable instant-read thermometer. You can’t guess this by feel or time. I use a cheap digital probe, and it’s the single most important tool in my kitchen for this reason.

    How You Can Repeat This Process

    This framework works for any problem where you need distributed consensus:

    • Step 1: Identify a question where expert opinion might be too narrow or outdated, but where lots of people have real-world experience. Examples: best running shoes for flat feet, how to structure a cold email, what time to arrive at TSA for a specific airport.
    • Step 2: Ask an AI research tool (Gemini Deep Research, Perplexity, ChatGPT) to pull a specific number of sources. I like 50 because it’s large enough to smooth out outliers but small enough that the AI won’t hallucinate or pad the data.
    • Step 3: Request a statistical breakdown. Don’t just ask for a summary. Ask for the median, mode, range, and clusters. This forces the AI to show its work and gives you a sense of how much agreement actually exists.
    • Step 4: Skim the output for the “why.” The stats tell you what people do. The forum quotes and explanations tell you why they do it. That context helps you decide if their reasoning applies to your situation.
    • Step 5: Validate the finding against your own experience. In my case, I’d been pulling chicken at 152°F and it was working great. The research confirmed I was in the right ballpark and gave me confidence to nudge up to 155°F.
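    If you run this process often, the ask itself can be templated. A small sketch; the wording mirrors my chicken prompt and is meant to be adapted per question, not treated as a canonical prompt:

```python
def research_prompt(question: str, n_sources: int = 50) -> str:
    """Build the aggregation-plus-statistics ask described in the steps
    above. The phrasing is illustrative; tune it for your tool."""
    return (
        f"Pull {n_sources} different instances of {question} "
        "recommendations from forums around the web and create a "
        "statistical analysis of where they fall. Report the median, "
        "mode, range, and any clusters of opinion, and list the raw "
        "data points so I can verify them."
    )

print(research_prompt("chicken breast cooking temperature"))
```

    Asking for the raw data points at the end is the cheap insurance policy: it lets you apply Step 5 by re-running the statistics yourself.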

    The Feeling

    There’s something deeply satisfying about this approach. It’s not the thrill of discovering some secret hack that no one else knows. It’s the opposite. It’s the relief of confirming that a bunch of thoughtful people have already debugged this problem, and their collective answer is both rigorous and practical.

    It also scratches a specific itch I have around problem-solving. I don’t want to blindly follow rules (the USDA’s 165°F), but I also don’t want to just wing it and hope for the best. I want to understand the underlying trade-offs (safety vs. texture, time vs. temperature), and I want to know where the consensus has landed after thousands of people have run the experiment in their own kitchens.

    Using AI research tools this way feels like having a tireless research assistant who can read 50 forum threads in 30 seconds and hand you a spreadsheet. It’s not replacing my thinking. It’s just doing the boring part so I can focus on the decision.

    And yeah, my chicken is better now.