How we saved a company from launching a “new” idea that already existed.
How a well-known Australian cybersecurity awareness company used BBT's Shark Tank Evaluator to avoid an expensive mistake, and ended up award-nominated instead.
There's a version of this story where a company burns six to twelve months of engineering time, a serious chunk of budget, and a lot of goodwill building something the market has already seen. They launch. Someone in procurement pulls up KnowBe4 and says "how is this different from what we already have?" and the wheels start coming off. They spend the next year on the back foot, trying to explain a distinction that buyers don't care about.
That version didn't happen. Here's the one that did.
The Idea Was Good. That Was Part of the Problem.
A well-established Australian cybersecurity awareness company came to us with a concept they were genuinely excited about. And to be fair to them, on the surface it made a lot of sense.
Their business was built on cybersecurity awareness training, helping SMEs, schools, charities, and not-for-profits get their people to make smarter decisions online. The problem they'd been solving for years is well-documented: most cyber incidents aren't caused by sophisticated attacks. They're caused by people. Clicking links they shouldn't click, approving dodgy-looking invoices because they're under pressure, reusing the same password they've had since 2011.
The standard fix the industry sells is: train people better. Run awareness modules. Do phishing simulations. Make everyone sit through the annual "don't click the link" presentation.
Their insight (and it was a genuinely good one) was that training doesn't actually help much in the moment. Someone flat out at the end of the day, staring at a convincing-looking invoice email, isn't mentally reaching back to a module they did six months ago. They click.
So the idea was: what if instead of periodic training, you built an AI-powered real-time safety companion? Something running quietly in the background that spots a risky action (a suspicious link, an unusual payment request, a sketchy file attachment) and nudges the user before they do the dumb thing rather than sending a report about it after. Behaviour change at decision time, not a debrief three days later.
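To make that mechanism concrete, here's a toy sketch of what decision-time intervention logic could look like. To be clear, this is our illustration, not the company's design: the heuristics and domain list are invented for the example, and a real product would lean on far richer signals (sender history, payment patterns, attachment analysis, ML risk models).

```python
from urllib.parse import urlparse

# Toy illustration of "nudge at decision time" rather than "report later".
# Everything here is invented for the example, not the company's design.

KNOWN_DOMAINS = {"yourbank.com.au", "xero.com", "microsoft.com"}

def nudge_on_click(url: str) -> str | None:
    """Return a plain-language nudge if the click looks risky, else None."""
    host = (urlparse(url).hostname or "").lower()
    if any(host == d or host.endswith("." + d) for d in KNOWN_DOMAINS):
        return None  # familiar destination: stay out of the user's way
    if host.count("-") >= 2 or any(ch.isdigit() for ch in host):
        return f"'{host}' looks unusual. Pause before entering any details."
    return f"You haven't dealt with '{host}' before. Were you expecting this link?"
```

The part of the pattern that matters is the timing: the message fires before the action completes, while the user can still change course, not in a report three days later.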
They called the concept "Psybersecurity," a behaviour-first framing that put psychology at the centre rather than yet another tech feature list.
Solid problem, smart framing, underserved market. None of which meant it was safe to build.
Why Confidence Is Not the Same as Evidence
Here's the thing about working in a market for years: you stop being able to see it clearly.
You've lived the problem. You've heard it from customers. You've watched the existing solutions fall short. From where you're standing, the gap is obvious. The idea to fill it feels inevitable.
From the outside looking in, a lot of "obvious gaps" turn out to be territories that other companies have already been quietly building in. The problem isn't that the idea is bad. The problem is that founders and product teams are almost constitutionally incapable of being objective about their own ideas. That's not a criticism, it's just how it works.
This is what makes technology product launches so expensive when they go sideways. And it's why this company, to their credit, decided to pressure-test the idea properly before committing serious resources. That brought them to the BBT Shark Tank Evaluator.
What the Shark Tank Evaluator Actually Does
Quick note on how this works before we get into what we found, because it isn't a business plan review, a SWOT exercise, or a facilitated workshop where everyone leaves feeling validated.
The Shark Tank Evaluator is a structured evaluation system for decisions where getting it wrong costs real money: new product launches, market entries, strategic pivots, R&D commitments to concepts that haven't been tested. It was built because most idea evaluation processes are, frankly, designed to say yes. This one is designed to find out what's actually true.
It runs four modules:
EPAS - Existence & Prior Art Scan
Before anything else, we find out whether the thing you're proposing already exists. Not a quick Google. A systematic search using category labels, buyer problem language, outcome terms, and geography. We're looking for close matches, adjacent solutions, and what your target buyers are already using to solve the same problem. If strong equivalents exist and your differentiation doesn't hold up, scores get capped. The EPAS is specifically designed to stop a good story from papering over what is, functionally, a renamed version of something that's been on the market for years.
BIVI - Business Idea Viability Index
This tests whether the idea can actually become a commercially viable business under real conditions. Not "does it make sense in a boardroom" - but can it survive procurement scrutiny, realistic delivery costs, the grind of SME sales cycles, and customer acquisition economics in the specific segment you're targeting? Every domain gets an evidence tag: Observed (we've verified it), Proxy (reasonable inference), or Assumption (a guess). A score propped up by assumptions isn't a green light. It's a list of things you need to go prove before spending up.
BMCI - Business Move Compounding Index
The defensibility test. Does this move build durable competitive advantage, or is it something a bigger competitor can copy in under twelve months? We look at whether you control a distribution chokepoint, whether you're building a data moat or an ecosystem that locks in value, whether switching costs accumulate in your favour over time. If a well-resourced competitor can replicate the core mechanism quickly, it isn't a strategic move. It's a temporary product feature.
BGPI - Brand Growth Probability Index
This one often surprises people. It evaluates whether the brand and marketing system behind the idea gives it a real shot at long-term growth: distinctive assets, category entry points (the specific buying triggers you need to own in your buyers' heads), availability, trust signals. A genuinely differentiated product with a forgettable brand dies quietly. It happens more than anyone likes to admit.
The output is a verdict (GO / GO-WITH-GATES / HOLD / REDESIGN / KILL), a composite score, a breakdown by section with evidence confidence ratings, the make-or-break factors, and a 30–90 day proof plan that specifies exactly what needs to be demonstrated before committing serious money.
No cheerleading. It's a due diligence instrument.
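To make the mechanics less abstract, here's roughly the shape of that kind of evidence-graded scoring. BBT doesn't publish the Evaluator's weights, multipliers, or verdict thresholds, so every number in this sketch is an assumption for illustration, not the actual model:

```python
# Sketch of evidence-graded scoring. The multipliers, cap, and thresholds
# below are assumptions for illustration; the actual model isn't public.

EVIDENCE_WEIGHT = {"observed": 1.0, "proxy": 0.8, "assumption": 0.5}

def adjusted_score(raw: float, evidence_tags: list[str]) -> float:
    """Discount a module's raw score by the quality of its evidence."""
    avg = sum(EVIDENCE_WEIGHT[t] for t in evidence_tags) / len(evidence_tags)
    return raw * avg

def epas_capped(differentiation: float, high_match: bool) -> float:
    """EPAS High Match caps differentiation (the 2/5 cap mirrors this case)."""
    return min(differentiation, 2.0) if high_match else differentiation

def verdict(composite: float) -> str:
    """Map a composite score to a verdict band (thresholds illustrative)."""
    if composite >= 70:
        return "GO"
    if composite >= 40:
        return "GO-WITH-GATES"
    if composite >= 25:
        return "HOLD or REDESIGN"
    return "KILL"
```

The part worth noticing is the discounting: a flattering raw score built mostly on Assumption-tagged evidence gets pulled down hard, which is exactly what turns the scorecard into a to-do list rather than a pat on the back.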
What We Found: The Idea Was Not Novel
The EPAS was where things got uncomfortable.
When we went looking at what was already in the market, we didn't find one or two competitors operating in adjacent territory. We found a well-funded, globally established category doing exactly what was being proposed.
KnowBe4's SecurityCoach, one of the biggest security awareness platforms on the planet, was already delivering behaviour-based nudges triggered by risky user actions. Mimecast Engage combined behavioural training with contextual, in-the-moment interventions. Proofpoint was already doing email threat detection with contextual user warnings built in. IRONSCALES was running AI phishing protection. Closer to home, Phriendly Phishing had an Australia-focused simulation and awareness platform already in market. And usecure was specifically targeting SMEs with training and phishing simulations.
Six close functional equivalents. Most with significant market presence. Several were already integrated into Microsoft 365 and Google Workspace, which is exactly the environment this product would need to operate inside.
EPAS verdict: High Match. The core mechanism being proposed, behavioural nudges delivered at the point of risk, was not a new idea. It was the category definition.
That didn't kill the concept. What it did was change the actual question. "Should we build this?" became: "Given that this already exists and is well-resourced, what would it actually take to win?"
Those are very different questions to be answering before you write a single line of code.
The Scores Told a Specific Story
The three indexes landed at a confidence-adjusted composite of roughly 41 out of 100. Verdict: GO-WITH-GATES. A real opportunity, but only if specific conditions get met before serious investment goes in.
Here's what the scorecard actually said:
Business Idea Viability (BIVI): 58/100
Problem severity: five out of five. Human error as a breach vector is well documented and this is a genuine, costly problem. Buyer urgency was real but patchy across segments; schools and charities are highly exposed but chronically underfunded. Market size was legitimate. Differentiation scored two out of five, capped because the EPAS had already found the category. Unit economics were unproven and delivery complexity was higher than the initial concept suggested. The summary: commercially plausible, strategically exposed.
Strategic Move Strength (BMCI): 42/100
This is where the report got direct. The proposed move was a product-evolution play, not a structural competitive move. There was no control point: no distribution chokepoint, no proprietary dataset, no ecosystem integration that would make displacement costly. The core software logic was assessed as replicable by a well-resourced competitor within twelve to twenty-four months. Without something that compounds over time, the business was one well-funded competitor pivot away from being squeezed out of a category it helped reinvigorate.
Brand Growth Probability (BGPI): 49/100
The "cyber mate" narrative had genuine legs and the behaviour-first framing was smart. But there were no distinctive brand assets, no strong category entry points owned, and no trust proof that would give an SME buyer confidence to choose an unfamiliar name when KnowBe4 or their IT managed service provider already offered something similar.
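A quick sanity check on those numbers, using the sketch from earlier: the three raw scores average to just under 50, so a reported composite of roughly 41 implies the evidence grading took a meaningful bite. The implied discount below is our inference from the published figures, not something the report states:

```python
raw_scores = {"BIVI": 58, "BMCI": 42, "BGPI": 49}

raw_mean = sum(raw_scores.values()) / len(raw_scores)  # ≈ 49.7
reported_composite = 41

# Implied confidence discount, assuming an equal-weight mean
# (the actual weighting scheme isn't published):
implied_discount = 1 - reported_composite / raw_mean   # ≈ 0.17, i.e. ~17%
```

In other words, roughly a sixth of the headline score evaporated on evidence quality alone. That's the "go prove it first" list made visible.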
The report also surfaces what it calls the Brutal Shark Question — the one question that, if you can't answer it specifically, signals the whole thing needs more work:
"Why would an SME buy this instead of the security tools already bundled in their Microsoft 365 subscription?"
At evaluation time, the answer wasn't tight enough. That gap was the work.
The Pivot: From "Another Platform" to Somewhere Defensible
Here's where the Shark Tank Evaluator earns its keep. It doesn't just tell you what's wrong — it maps what would need to be true to move the score materially, and by how much.
The report laid out three strategic paths to shift the composite from ~41 toward 70+.
Path 1: Build a Behaviour Intelligence Data Moat
Instead of just delivering nudges, instrument every interaction. Every phishing simulation response, every hesitation before a risky click, every warning acknowledged or ignored, every risk pattern by role and department across the customer base. Over time, that becomes something the major vendors don't have: a proprietary human behaviour dataset specific to SMEs and the not-for-profit sector. The product stops being a training tool and becomes a Human Risk Intelligence Platform: predictive risk scoring, organisational benchmarking, behavioural forecasting. The more organisations use it, the harder it becomes to displace. That's what a real data moat looks like and it's genuinely difficult to replicate once it reaches scale.
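To give a feel for what "instrument every interaction" means in practice, here's a hypothetical event shape. The field names and categories are ours, invented for illustration; this is not the company's schema.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical behaviour-telemetry event, invented for illustration.

@dataclass
class BehaviourEvent:
    org_id: str            # which customer organisation
    role: str              # e.g. "finance", "admin" (role, not individual)
    event_type: str        # "phish_sim_click", "nudge_shown", "nudge_ignored"
    risk_category: str     # "link", "payment", "attachment"
    nudge_heeded: bool     # did the user change course after the nudge?
    latency_ms: int        # hesitation time before the risky action
    occurred_at: datetime

# Aggregated across thousands of organisations, events like these become
# the proprietary dataset: who hesitates, which nudges work for which
# roles, and where risk concentrates by sector and department.
```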
Path 2: Lock In MSP Distribution
Distribution beats product innovation in most technology categories, and cybersecurity is no exception. MSPs already have trusted relationships with thousands of SMEs and manage their security stacks. If this platform gets embedded as the default behaviour layer in MSP security packages (sitting alongside email protection, endpoint, and backup), it stops needing to win individual procurement battles. A competitor then has to dislodge it from entire MSP portfolios rather than from individual accounts. That's a structurally different and much harder challenge to mount.
Path 3: Own the "Human Firewall" Category
New categories let smaller companies compete on their own terms. "Human Firewall" reframes the competition entirely: traditional cybersecurity protects systems, this protects people. The old model runs annual compliance modules and sends reminder emails. The new model runs continuously at the point of decision. A company that stakes out this territory clearly, builds distinctive brand assets around it before the big vendors notice, and does it consistently, can become the default name buyers reach for when human cyber risk comes up. Category ownership is the most durable competitive position there is, and it's available to whoever claims it first with conviction.
The strongest play combines all three. MSP distribution gets you inside thousands of SMEs fast. The behavioural data you collect at scale becomes an increasingly difficult moat to cross. And Human Firewall category ownership means you're the name people associate with the problem — which compounds over time in ways that ad spend alone can't replicate.
What Happened Next
The company took the report seriously, which, to be honest, isn't always the reaction we get when the scores come back below 50. They didn't push back on the EPAS findings or try to argue that the incumbents weren't really doing the same thing.
They used the evaluation to restructure their priorities. Development focus shifted toward the structural moves rather than feature parity. The SME positioning got sharper. They started building toward distribution rather than just product.
Since the evaluation, the company has been nominated for two industry awards:
AUS CYBER 2026 - Cyber Training Business of the Year
2026 Small Business Champions Award - Information Technology Category
That's not nothing in a sector where the incumbents have deep pockets and established reputations. Those nominations reflect market recognition of a company that looks and acts like a genuine category player — which is a very different thing from being a well-intentioned startup with a feature list.
The Bit That Mostly Gets Skipped
Most idea evaluation processes are designed, consciously or not, to reach yes. You build a deck, get the room excited, look for evidence that supports the direction you've already chosen. The absence of an obvious fatal flaw gets treated as validation. Confidence gets mistaken for evidence.
The bill arrives later. A procurement officer asking the hard question your pitch didn't answer. A sales cycle that stalls because you haven't established trust with buyers who've been burned before. A competitor that copies your feature set within a year because you never built anything that was actually hard to replicate.
In the Australian B2B market (long sales cycles, conservative buyers, procurement processes that treat unknown vendors like a liability), the cost of going to market underprepared is steep. You don't just lose the deal. You lose eighteen months of momentum and a chunk of credibility that takes years to rebuild.
Running an evaluation like this before committing isn't about lacking confidence in your idea. It's about finding out which specific questions you haven't properly answered yet, before the market finds them for you.
Is Your Idea Ready for the Sharks?
If you've got a product concept, a market entry, a strategic pivot, or a significant growth move on the table — this is the evaluation it should go through first.
Not a motivational workshop. Not a consultant who tells you what you want to hear. A structured, evidence-graded process that tells you what's real, what's missing, and exactly what it would take to make it win.
(A fee applies for the full Shark Tank Report.)
Or talk to us directly: grant.belcher@bigbrandtheory.com.au
Big Brand Theory is a brand strategy and battle intelligence agency working with Australian advanced manufacturing, tech, cyber, and B2B firms. The Battle Lab is where ideas get properly stress-tested - and where the ones that survive come out with a plan worth backing.