
Many community engagement professionals looked on as the recent scandal unfolded over the renaming of Lady Cilento Hospital in Queensland, Australia. Evidence came to light that a public voting process to decide whether to rename the hospital may have been tampered with or ‘gamed’.

Nearly 80% of submissions collected via the Queensland Government’s engagement portal were suspicious, coming from a small number of IP addresses, including some within the government itself. That an engagement could be so comprehensively gamed without detection exposes a common flaw in many government community engagement projects: a lack of risk management for dealing with suspicious activity, compounded by a lack of transparency.

Digital community engagement practices have the potential to reach people and gather actionable data in meaningful and effective ways. But if you aren’t building a robust risk management plan for tampering into your engagement design, you are putting yourself at risk and doing your community a disservice.

Policy and planning decisions based on corrupted or tampered results are potentially scandalous if handled incorrectly, and can lead to wasted money and lost public confidence. It simply isn’t worth the risk.

Luckily, there’s plenty you can do about it.

Firstly, don’t panic. While result tampering can be damaging, it is actually very rare. Digital community engagement presents opportunities for meaningful participation and can be far more reliable than face-to-face public meetings with “the usual suspects”. But, like any other engagement process, it is vital to identify and manage potential risks.

Understand and anticipate motivations

There are a range of reasons people cheat – understanding them can help anticipate risky situations and help you choose the right tools for the job.

Cheating used to be thought of as a personal failing stemming from a lack of ‘moral development’. More recently, a simpler view gained traction: that we all weigh the benefits of our unethical actions against the costs of committing them, and decide accordingly. Others believe that cheating results from highly subjective reasoning processes that include how we think about the world, other people and ourselves. These processes could be subconscious, resulting from situational forces we may be unaware of. Pretty relatable, right?

So your average cheater isn’t a bad person consciously trying to cynically manipulate big decisions. Rather, they’re usually good, ordinary citizens, influenced – possibly subconsciously – by their interests, values, situation and concerns: just like you and me. Try to anticipate and understand their motivations when planning your engagement. You may even be able to turn them to your advantage and learn more about your community.

Some of the key factors that influence people to cheat include:

  • Prize money and funding. Money can be offered as a prize or incentive for participation, or given out to a program or project. If money is involved, be aware that this can shape people’s behaviour in often unpredictable ways. Be sure that there are appropriate safeguards in place, or that the incentives are commensurate with the effort and value of a person’s input.
  • Big winners and losers. Special interest groups who stand to gain or lose (or perceive they may) from the outcome of an engagement can feel pressure to cheat. This risk is inflated if these groups are well organised and already vocal, as they can recruit people to their cause. Think about who stands to gain or lose, and the likelihood they might cheat based on your understanding of them or past experience.
  • Critical decisions. People are more likely to cheat if there is a critical decision to be made through the engagement and they feel they have the power to shape or influence the decision. If there is a perception that a decision will be made solely on the numeric results of an engagement, people will be tempted to stack the deck to get their way. Take the time to explain how decisions will be made.
  • Ease of cheating. Put simply, sometimes people just cheat because it is easy. Votes and polls are notoriously unreliable even when they aren’t gamed. A variety of unrelated variables can influence a simple vote, undermining the value of the results. Ensure your tools are appropriate for the potential impact of the outcomes.
  • Lack of accountability. People are also more likely to cheat if they are anonymous and unaccountable for their behaviour.

In summary, less oversight and accountability leads to a higher chance of cheating, regardless of who is participating.

Plan and prevent

Governments should build prevention into their engagement design from the very start. Whether your engagement will inform an important decision, or you’re just taking the pulse of the community, you should consider the risks and put steps in place to prevent tampering.

Many engagement managers fret about ‘barriers to engagement’ and meeting participation targets. But the integrity of your data should never be compromised. The truth is, if you are collecting data to inform important decisions, there should be safeguards to prevent vested interests skewing the results.

Here are some simple safeguards against cheating:

  • Don’t make it a popularity contest. Quantity ≠ quality. If participants are led to believe (even implicitly through activities such as voting) that a process rewards quantity, they are more likely to seek to manipulate the results.
  • Explain how you will use your data. People will be less inclined to cheat if they understand how the results will inform your decision-making and that cheating may not improve their chances of getting what they want.
  • Collect personal information. While not always appropriate, the more you know about your participants, the easier it is to detect patterns and determine how representative your results are.
  • Register your participants. User registration is an easy and effective way to counter tampering, especially if registrations are verified. It also deters repeat submitters and ensures only truly engaged users make submissions, improving the quality of your data.
  • Create a risk management plan. Anticipate the risks and figure out a plan to deal with the issues before they happen.
  • Use incentives sparingly. Paying people for their opinions can be risky, and we generally don’t support these types of incentives. However, they can be effective in certain situations, such as competitions where a person has spent time creating something in the hope of winning a prize.
  • Choose appropriate tools. Each type of tool you select in your engagement has a different risk profile associated with it. If you are conducting an important voting process that will lead to a significant outcome, open public voting tools such as polls may not be appropriate and perhaps you should look at collecting votes through a more controlled format such as a form. You can always make the results public when the voting has concluded.

How to detect tampering

Even with a robust framework in place for planning and prevention, you should always review and validate your data.

There are a range of methods to detect tampering. Participants may cheat directly, or utilise automated scripts or robots.

Robots are usually easy to detect. Look for:

  • Identical results submitted multiple times
  • Repeated IP addresses
  • Submissions at regular, continuous intervals
  • Unrealistically high numbers of submissions.

The repetition in results from robots makes them easy to detect and discount when analysing your data. If you suspect a robot is targeting your site, tell your provider – we have a range of ways to prevent and stop robots from targeting HiVE engagement sites.
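The robot signatures listed above can be checked with a short script. This is a minimal sketch, not a production detector: the field names (`ip`, `text`, `timestamp`) and the thresholds are illustrative assumptions, and any real engagement platform will have its own export format.

```python
from collections import Counter

def flag_robot_signals(submissions, ip_threshold=50, interval_tolerance=2.0):
    """Flag common robot signatures in a list of submissions.

    Each submission is a dict with 'ip', 'text', and 'timestamp'
    (seconds since epoch). Thresholds here are illustrative only.
    """
    flags = []

    # 1. Identical results submitted multiple times
    text_counts = Counter(s["text"] for s in submissions)
    dupes = {t: n for t, n in text_counts.items() if n > 1}
    if dupes:
        flags.append(f"{len(dupes)} distinct texts submitted more than once")

    # 2. Repeated IP addresses
    ip_counts = Counter(s["ip"] for s in submissions)
    heavy_ips = {ip: n for ip, n in ip_counts.items() if n >= ip_threshold}
    if heavy_ips:
        flags.append(f"{len(heavy_ips)} IPs at or above {ip_threshold} submissions")

    # 3. Submissions at regular, continuous intervals
    times = sorted(s["timestamp"] for s in submissions)
    gaps = [b - a for a, b in zip(times, times[1:])]
    if len(gaps) >= 5 and (max(gaps) - min(gaps)) <= interval_tolerance:
        flags.append("near-uniform submission intervals")

    return flags
```

Flags like these are review prompts, not verdicts – each one should send an analyst back to the raw data rather than trigger automatic deletion.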

Humans are harder to detect: they will not lodge submissions at perfectly regular intervals, and may vary their language or change their user information (email address, username, postcode, etc.) to obscure their cheating. However, few have the energy to do this convincingly (we’ve never seen anyone pull it off well).

When detecting human interference, look for:

  • Repetitions in language. Most people don’t realise how idiosyncratic their language is, so this can be surprisingly easy to spot.
  • Advocates for a single point of view. Most engagements don’t go in blind: if your results differ from your other research or intelligence, interrogate them.
  • Unrepresentative samples. Special interest groups will often attempt to skew results in their favour, even if they aren’t representative of the community as a whole.
  • Unrealistically high results. Again, unrealistic trends or response numbers should be viewed skeptically.
  • Session-based metrics. IP addresses and user session IDs can be used to reveal repeat submitters.

Note that IP addresses, while useful (the Lady Cilento scandal was revealed when 18,000 votes were found to have come from just 74 IP addresses), are blunt indicators and not always reliable. Hundreds of employees at one organization may make perfectly legitimate submissions from the same IP address, and integrated engagements may use tablets or laptops to collect multiple responses. By the same token, canny individuals can abuse an engagement by making multiple submissions with different IP addresses.
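Repetitions in language can be surfaced with simple similarity scoring. As a sketch using Python’s standard difflib (the 0.85 cut-off is an illustrative assumption, and the pairwise comparison is only practical for reviewing modest samples):

```python
from difflib import SequenceMatcher
from itertools import combinations

def near_duplicate_pairs(texts, threshold=0.85):
    """Return (i, j, score) for submission pairs with suspiciously similar wording.

    Scores are difflib ratios in 0..1; 0.85 is an illustrative cut-off.
    O(n^2) comparisons, so intended for manual review of small batches.
    """
    pairs = []
    for i, j in combinations(range(len(texts)), 2):
        score = SequenceMatcher(None, texts[i].lower(), texts[j].lower()).ratio()
        if score >= threshold:
            pairs.append((i, j, round(score, 2)))
    return pairs
```

As with IP checks, a high-similarity pair is a prompt for human review – legitimate form letters from a campaign and genuine coincidences both exist, so flagged pairs should be read, not discarded.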

Acknowledge and mitigate

If you suspect there has been suspicious activity on your engagement, it is not too late to fix things.

You may have cleaned your data, but there are a few other best practice steps to take:

  • Acknowledge the issue. Transparency builds trust; obfuscation destroys it. If you suspect tampering, outline the extent of any tampering and details about how and why it may have occurred.
  • Demonstrate the steps you took to fix it. A good example of this is the City of Calgary’s 2026 Winter Olympic Games engagement, where potential outcry over suspected tampering was defused with a succinct list of steps to mitigate and protect the integrity of the data.
  • Explain how you will use your data – again. If special interest groups try to game your results, you can’t necessarily discount their input. But you can explain how your data fits into your overall project plan and why you are not preferencing skewed answers.

Remember, problems turn into scandals when they are deliberately obscured. The Lady Cilento affair was uncovered via an FOI request, meaning the opportunity to take control of the narrative and mitigate harm was lost. A transparent and proactive approach to engagement tampering can help reinforce community trust and organisational integrity, as well as defusing potential bad publicity.

Some things to remember

Community engagement is not a popularity contest: certain people or groups should not have a greater say just because they are technologically savvy or well-organised. Good community engagement data is most effective when used alongside robust research and evidence. These things can help you ensure your data is representative, clean and useful.

Removing incentives to cheat and using the right tools for the job will also result in better, more reliable data and strengthen the community’s trust in the integrity of government engagement processes.

The community engagement profession and the technology providers that support it should also constantly look for new ways to plan for, assess and mitigate risk. More sophisticated methods of detection will help grow confidence in digital engagement methods, while a sophisticated approach to designing engagements with those tools can prevent and mitigate tampering.