BEC in BOQ: Unfair Dismissal Seasoned with Cyber
By Ravi Nayyar
The Bank of Queensland (‘BOQ’) dismissed its long-serving employee, Kylie Smith (‘the Applicant’), on 8 May 2020. She was Branch Manager of BOQ’s Nambour Branch. BOQ sacked her because she caused the bank loss after falling victim to a business email compromise (‘BEC’) scam: (at [1])
The dismissal was as a result of the Applicant mistakenly transferring an amount of $37,500.00 being the final draw down from an owner/builder construction loan account held by a BOQ customer, to an account with the Commonwealth Bank of Australia (CBA) operated by a person who had fraudulently intercepted the emails of the BOQ customer.
Fair Work Commission Deputy President Ashbury held that BOQ had unfairly dismissed the Applicant, given the engagement of section 387 of the Fair Work Act 2009 (Cth) (‘the Act’) by the facts. The Deputy President wrote at [156] that:
Having considered the matters in s. 387, I am satisfied that on balance, the dismissal was harsh, because of its effect on the Applicant in the context of her good employment record and the length of her service. While the Applicant did engage in misconduct which was careless, her dismissal was also unreasonable because Mr Holland’s conclusion that the Applicant could not be trusted to carry out her role in future, was not soundly based.
The Applicant’s application under section 394(1) of the Act was successful.
What is Business Email Compromise?
Per Europol, business email compromise is a cybercrime type which, as the name suggests, targets organisations: criminals socially engineer people who work for the victim into executing financial transfers, under false pretences, to bank accounts that the criminals control. As part of their tactics, techniques and procedures (‘TTPs’), attackers may compromise the email accounts of someone within the target, or of a third party dealing with the target (as in this case), in order to impersonate them. As the Applicant’s experience suggests, a BEC attack does not always involve the attacker compromising the cyber-physical infrastructure of the target organisation, such as through exploitation of a zero-day vulnerability in software used by the target’s endpoints, which can be triggered by a malware-laced email.
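To make the mechanics concrete, the kind of impersonation described above often leaves technical traces, such as a Reply-To address that diverges from the From address, or a sender domain that is a near-miss of a trusted one. The following is an illustrative sketch only; the function names, heuristics and thresholds are hypothetical and do not represent BOQ’s or any vendor’s actual controls:

```python
# Illustrative sketch: two simple heuristics that mail-filtering tools
# can use to flag possible BEC emails. All names and thresholds here
# are hypothetical examples, not any real bank's or vendor's controls.
from email.utils import parseaddr


def lookalike(domain: str, trusted: str, max_edits: int = 2) -> bool:
    """True if `domain` is within `max_edits` single-character edits of
    `trusted` without being identical (e.g. 'examp1e.com' vs 'example.com')."""
    if domain == trusted:
        return False
    # Standard Levenshtein edit distance via dynamic programming.
    prev = list(range(len(trusted) + 1))
    for i, a in enumerate(domain, 1):
        curr = [i]
        for j, b in enumerate(trusted, 1):
            curr.append(min(prev[j] + 1, curr[-1] + 1, prev[j - 1] + (a != b)))
        prev = curr
    return prev[-1] <= max_edits


def bec_red_flags(headers: dict, trusted_domain: str) -> list:
    """Return a list of red-flag descriptions for an email's headers."""
    flags = []
    from_domain = parseaddr(headers.get("From", ""))[1].rpartition("@")[2]
    reply_domain = parseaddr(headers.get("Reply-To", ""))[1].rpartition("@")[2]
    if reply_domain and reply_domain != from_domain:
        flags.append("Reply-To domain differs from From domain")
    if lookalike(from_domain, trusted_domain):
        flags.append("From domain is a near-miss of the trusted domain")
    return flags
```

For example, an email from `pay@examp1e.com` with a Reply-To at a third domain would trigger both flags when checked against the trusted domain `example.com`, whereas a genuine email from the customer’s real domain would trigger neither.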
Between July 2019 and June 2020, the Australian Cyber Security Centre (‘ACSC’) observed an explosion in BEC and predicted that it would become an even more prominent cybercrime type. Such predictions are informed by the prolific targeting of the Australian financial services sector by BEC campaign operators, with the Australian Federal Police (‘AFP’) noting that Chinese and Middle Eastern groups increasingly target funds management businesses. Such is the scale of the problem that Australian organisations lost $174 million to BEC-based attacks between July 2019 and November 2020. Around 2,000 attacks occurred between July and November 2020 alone, a doubling in the prevalence of BEC typologies. Given the global nature of BEC, with malicious actors using transnational infrastructure to execute attacks and launder the proceeds, the AFP is working with its counterparts in Europe and North America to tackle this growing cybercrime category more effectively.
One prominent Australian BEC case study was Levitas Capital, a hedge fund which was victimised in September 2020. Malicious actors used a fake Zoom invite and fraudulent invoices to make the fund’s trustee and administrator transfer $8.7 million to accounts controlled by the attackers. That some of those accounts were located in Hong Kong and Singapore reflects the nature of BEC as a transnational crime type and the sophistication of attacker tradecraft, especially when combined with the reconnaissance of target networks which is required to execute the social engineering component of a BEC attack. Such was the impact of the attack on the fund’s reputation that its biggest client withdrew its funds. In turn, the fund was forced to close. This arguably brings the non-financial implications of BEC breaches, and perhaps those of an organisation’s cyber resilience more generally, into sharp relief. After all, a breach can undermine confidence in the victim entity, the market it operates in and the digital infrastructure underpinning that market.
For the sake of completeness, note that, between July and December 2020, 34 notifiable data breaches resulting from social engineering or impersonation were reported to the Office of the Australian Information Commissioner (‘OAIC’) under the Privacy Act 1988 (Cth), 14 fewer than were reported from January to the end of June that year.
What Were the Facts of the Case?
At the time of the attack, there was a drawdown worth $37,500 remaining on a BOQ customer’s construction loan. The Applicant was instructed to release those funds to the customer. On receiving a tax invoice from the customer as required, the Applicant moved to complete the progress payment request form.
Having compromised the customer’s email account, the fraudster used it to email the Applicant, asking that the payment be directed to a Commonwealth Bank of Australia (‘CBA’) account in the customer’s name. The Applicant replaced the customer’s BOQ account with the CBA account in the progress payment request form. After the form was signed and sent to BOQ’s Drawdowns unit, the payment was processed.
CBA advised BOQ that the CBA account was potentially used for fraud. The name of the account holder was not that of the BOQ customer. BOQ then started investigating the fraud, finding that the Nambour Branch had breached BOQ procedure. The breach of procedure was said to have resulted in loss to the customer and bank. Following the investigation, the Applicant was dismissed.
How Was the Case Decided?
For the Fair Work Commission (‘the Commission’) to order a remedy for unfair dismissal under subsection 390(1) of the Act, it must be satisfied that the Applicant:
- ‘was protected from unfair dismissal (see Division 2) at the time of being dismissed’ (per paragraph 390(1)(a)); and
- ‘has been unfairly dismissed (see Division 3)’ (per paragraph 390(1)(b)).
Paragraph 390(1)(a) was met on the facts (per [5]).
For paragraph 390(1)(b) to be met, the dismissal must satisfy the criteria under section 385 of the Act. One criterion was in issue: whether the dismissal was ‘harsh, unjust or unreasonable’ (per paragraph 385(b)). To assess whether the dismissal was such, the Commission must consider the factors specified by section 387 of the Act.
Per paragraph 387(a), one factor is ‘whether there was a valid reason for the dismissal related to the person’s capacity or conduct (including its effect on the safety and welfare of other employees)’. For a dismissal in relation to the person’s conduct, the Commission must identify whether: (per [119])
- on the balance of probabilities, the alleged conduct occurred on the evidence; and
- the alleged conduct ‘was of sufficient gravity or seriousness to justify dismissal as a sound, defensible or well-founded response’ (‘Limb 2’). Deputy President Ashbury cited Bista v Glad Group [2016] FWC 3009 (19 May 2016) in support of this point of law.
Deputy President Ashbury held (at [126]) that application of Limb 2 includes consideration of ‘mitigating factors which may also go directly to the validity of a reason for dismissal by mitigating the seriousness of the conduct for which a person was dismissed’. Such factors can include ‘lack of training or the dismissed employee being placed under undue pressure by some failure on the part of the employer’, thus contributing to the alleged conduct.
Limb 2 was in issue.
On one hand, the Applicant’s conduct was found to comprise: (at [129])
a simple failure to give sufficient attention to avoiding error and a failure at a number of points to appreciate that there were red flags which should reasonably have caused the Applicant to stop and check what was occurring before taking various steps that led to the fraud. The Applicant’s conduct was also a failure to take proper care in the performance of her duties.
These failures are elaborated on by Deputy President Ashbury at [130]-[134].
Deputy President Ashbury, however, held at [129] that the Applicant’s conduct was neither negligent nor an intentional failure to comply with BOQ procedure. Note that the parties agreed that the Applicant did not act dishonestly (per [2]).
Additionally, and perhaps more importantly, the Deputy President held (at [135]) that there were mitigating factors which left Limb 2, and thus paragraph 387(a) of the Act, unsatisfied. These are analysed at [135]-[145] and include the:
- lack of training of the Applicant in dealing with the type of fraud in question and executing the lending transaction in question;
- qualities of the fraudster’s emails, as identified by the BOQ investigation, not being ‘so obvious that the Applicant should reasonably have noted them’ as being reflective of fraud;
- Applicant’s being under pressure from lower-than-normal staffing levels at the Nambour Branch, despite the branch being quite busy because of its customers’ demographic and the pandemic, and the Applicant’s going on leave; and
- Applicant’s remorse and regret regarding her conduct, combined with this case representing her first victimisation by BEC.
Deputy President Ashbury also held (at [147]-[149]) that paragraphs 387(b)-(c) of the Act were engaged because the Applicant was neither notified of the reason for her dismissal nor given an opportunity to respond to it. At [153], it was held that paragraph 387(h) was engaged by ‘the Applicant’s length of service and her excellent work record’, which BOQ gave little consideration in its decision to dismiss her.
Hence, in light of the factors under section 387 of the Act, Deputy President Ashbury held (at [156]) that the Applicant’s dismissal was harsh ‘because of its effect on the Applicant in the context of her good employment record and the length of her service’. The dismissal was also unreasonable because BOQ’s determination that ‘the Applicant could not be trusted to carry out her role in future’ lacked a rational basis. The Applicant’s application under section 394(1) of the Act was thus successful.
The matter will be listed for Mention to program proceedings for determining the remedy after further submissions, as requested by the parties.
What are the Key Lessons?
While Smith v Bank of Queensland was an unfair dismissal decision, it arguably carries a few key lessons for organisations in relation to cyber resilience. Ironically, the Applicant’s Area Manager arguably highlighted the cyber dimension himself (at [77]) in an email justifying the Applicant’s dismissal: ‘This issue is not a lending issue, rather an issue of identifying risk and putting steps in place to protect the bank’ (emphasis added).
This article adopts the Australian Securities and Investments Commission’s (‘ASIC’) definition of cyber resilience as ‘the ability to prepare for, respond to and recover from a cyber attack’.
Insider Risk
The risk that employees and other insiders pose to organisational cyber resilience is a serious one which organisations must tackle. Whether or not insiders act maliciously, they can adversely affect their organisation’s information assets to the same degree as outsiders. This potential is writ large in how insiders contributed to 35 notifiable data breaches reported to the OAIC from July to the end of December 2020, up from 25 in the previous six-monthly period. More generally, an analysis of 3,950 data breaches that occurred worldwide from 1 November 2018 to 31 October 2019 found that:
- 30% of them involved insiders;
- 22% included errors as causal events; and
- 8% constituted misuse by authorised users.
The insider threat is especially live in relation to BEC because attackers target insiders in order to cause loss to the organisation.
In this regard, the case draws attention to the importance of training employees to detect and tackle fraud, helping them better distinguish a fraudster’s communications from those of an actual customer and thus minimising the potential for them to jeopardise their organisation’s cyber resilience. Such an issue was evident in the present case (at [24]): the Applicant said that she did not regard as suspicious the fraudster’s request that the funds be transferred to a CBA bank account because the customer was ‘missing on some payments’. Similarly, Deputy President Ashbury highlighted a lack of training as contributing to the breach (at [135]):
The Applicant… was not provided with training in relation to fraud of the kind that she encountered… She had not encountered fraud in all her years of employment at the Bank. Quite simply, the Applicant had no basis to suspect fraud and no real life experience to apply to the situation she was faced with. (Emphasis added)
This case also teaches us that organisations, in addition to providing employees with counter-fraud training, must update that training in line with evolving BEC typologies to ensure that employees, and thus organisational cyber resilience, are not left behind the curve of attackers’ TTPs. Deputy President Ashbury’s words reinforce this (at [135]):
Even if the Applicant was given some training at the time she was promoted to Branch Manager, that promotion occurred in 2014… Email scams of the kind perpetrated on the Applicant have been more prevalent in recent times. (Emphasis added)
Note that ASIC regards staff training and awareness of cyber risk, and their continuous improvement, as examples of cyber resilience good practice. The regulator highlights that training must be ‘effectively managed and monitored against success criteria’. This is to help ensure that employees can identify cyber risk and represent ‘an effective defence against malicious cyber activities by preventing incidents arising from attempted phishing attacks and other forms of social engineering [that can include BEC attacks]’.
Organisations should also ensure that employees tasked with executing a particular function or transaction are appropriately trained to do so. This can strengthen employees’ understanding of organisational procedures and of how malicious actors can exploit weaknesses in those procedures or their execution. The present case highlighted the role of improper execution of BOQ procedures (at [131]), and the employee’s lack of training regarding those procedures (at [140]), in facilitating the BEC attack. Training in organisational procedures, like counter-fraud training, should be tailored to employees’ roles and enable them to tackle the BEC risk they are likely to face day to day. It is thus rather shocking that BOQ seemed more concerned about its reputation after the breach than with recognising its failure to appropriately train the Applicant:
[The Applicant’s Area Manager] stated that he advised the Applicant she should consider resigning from her employment as his concern was that if she were terminated and the reasons were disclosed, it would have repercussions from an industry reputational perspective. (At [66], emphasis added)
[The Applicant’s Area Manager] accepted that he was aware the Applicant was not trained as a lender, and the type of transaction in the current case was one that would usually be undertaken by a lender. Mr Holland agreed that the Applicant was performing this role as there was not a lender in the Nambour Branch. (At [69], emphasis added and footnote omitted)
Organisations should combine human-directed controls like counter-fraud training with technological controls that enable employees to better identify fraud. These can include artificial intelligence (‘AI’) algorithms that help determine whether an email or its attachment(s) are malicious. With AI potentially augmenting the capability of human decision-makers in cyber resilience more generally, such technological controls can mitigate the impact of employee stress or fatigue on the detection of BEC scams, and address the issue, raised in the present case (at [137]), of red flags in the metadata of BEC emails not being readily visible to human observers. AI can analyse customer conversation histories or monitor an organisation’s networks more efficiently than humans, establishing patterns of life and identifying where those patterns are broken by, say, a fraudster impersonating a customer. Organisations would also be well advised to invest in appropriate technological solutions because malicious actors are using, or are foreshadowed as using, technologies like AI to automate attacks (including BEC campaigns) at scale, or to impersonate senior employees via audio deepfakes in order to socially engineer junior members of an organisation.
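The ‘pattern of life’ idea described above can be sketched very simply: a payment instruction is flagged when the destination account has never appeared in the customer’s transaction history, or when the amount is a statistical outlier relative to that history. The following is a hypothetical, minimal illustration; real systems use far richer features, and the field names and threshold here are assumptions for the example, not any bank’s actual logic:

```python
# Hypothetical sketch of pattern-of-life anomaly detection for payment
# instructions. Field names ("account", "amount") and the z-score
# threshold are illustrative assumptions, not any bank's real system.
from statistics import mean, stdev


def anomaly_flags(history: list, request: dict, z_cut: float = 3.0) -> list:
    """Compare a payment request against the customer's history and
    return a list of human-readable anomaly descriptions."""
    flags = []
    # Novel destination account: the single biggest BEC red flag.
    known_accounts = {h["account"] for h in history}
    if request["account"] not in known_accounts:
        flags.append("destination account never used by this customer")
    # Amount outlier: flag amounts far outside the historical pattern.
    amounts = [h["amount"] for h in history]
    if len(amounts) >= 2 and stdev(amounts) > 0:
        z = abs(request["amount"] - mean(amounts)) / stdev(amounts)
        if z > z_cut:
            flags.append(
                f"amount deviates {z:.1f} standard deviations from history"
            )
    return flags
```

On this sketch, a request redirecting funds to a never-before-seen account (as in the present case) would be flagged for human review even if the amount itself looked routine, which is precisely the kind of metadata-level red flag the Deputy President noted was not readily visible to a human observer.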
Greater Vulnerability during Crises
Insider risk operates in the context of organisations’ greater vulnerability to breaches of cyber resilience during crises. The COVID-19 pandemic is a good case study. The greater attack surface of organisations during the pandemic is arguably reflected in the more than 45 COVID-themed cybercrime and cyber security incidents reported to the ACSC between 10 and 26 March 2020. The OECD points to the stress placed on organisations’ information assets and employees as operations moved rapidly online as a risk factor: ‘Fast implementation of new or temporary digital infrastructures, and adoption of new processes on the fly are likely to create weak links that malicious actors are continuously seeking to exploit’. In seeking to make that operational shift smooth and swift, organisations may have overlooked the maintenance of their controls for cyber resilience. Geoff Summerhayes, an Executive Board Member at the Australian Prudential Regulation Authority (‘APRA’), reported in November 2020 that a majority of APRA-regulated entities ‘have [not] gone back to firmly close the gates they left ajar in March’, and that the regulator is aware of some entities’ employees sending confidential information to personal email addresses.
Note that the role of employee stress in breaches of cyber resilience was evident in the present case (at [141]): the Applicant executed the relevant transactions while managing a branch with fewer staff than normal, despite it being busier than usual because of its customers’ demographic and the effects of the pandemic.
An Organisation-Wide Effort
Smith v Bank of Queensland highlights that cyber resilience is a mission which spans an entire organisation, rather than a tick-box exercise which can be resolved by that organisation scapegoating an employee, especially if that employee did not act maliciously in failing to identify a BEC attack which they were not trained to actually identify. Per ASIC, those responsible for oversight (such as Board members) must own their organisation’s cyber resilience. They must approach its management as a ‘critical management tool’ for gauging operational risk and designing governance strategies in response. The OECD Council echoes this in recommending that organisations ‘act responsibly and be accountable, based on their roles, the context and their ability to act, for the management of digital security risk’. Australia’s Cyber Security Strategy also identifies the responsibility of businesses to secure their networks, supply chains and customers as a key component of assuring Australia’s cyber resilience.
As above, organisations must invest in a risk-based response to threats to cyber resilience. Their controls must enable detection of, response to, and recovery from, breaches. The controls’ effectiveness should be assessed regularly against objective criteria, including via penetration testing. This should be complemented by risk assessment and reporting mechanisms that ensure that the stewards of an organisation (like the Board) are kept apprised of its evolving attack surface.
Governments must assist SMEs in instituting and maintaining these controls, given the level of monetary investment and expertise which is required. It is encouraging that Australia’s Cyber Security Strategy contains concrete policy measures in this regard, such as better liaison between government agencies and SMEs, as well as the provision of guidance on cyber resilience (like the ACSC’s Small Business Cyber Security Guide).
The stakes are high for organisations to get cyber resilience right because of the litany of costs otherwise, including: (per the U.S. Securities and Exchange Commission and Moody’s)
- technical remediation costs after breach(es);
- costs of upgrading (protection of) organisational networks and hiring more cyber resilience experts;
- loss of revenues and valuable intellectual property;
- harm to reputation, value and competitiveness;
- legal and regulatory risk;
- higher insurance premiums; and
- lower credit ratings.
It’s People, Stupid
Given that human beings are ultimately responsible for designing and implementing organisational controls, and accountable for failures thereof (see eg guidance from ASIC and APRA), organisations must ensure that their employees and stewards are motivated to work towards cyber resilience.
Cybersecurity advocates may play a key role in this regard as ‘security professionals who promote, educate about, and motivate adoption of security best practices and technologies as a major component of their jobs’ (per Haney and Lutters). They can help drive reform of organisational culture in favour of prioritising cyber resilience and implement more tailored training programs. Organisations can use cybersecurity advocates to better resource, enable and empower their employees and stewards to tackle cyber risk in a more effective fashion. As Haney and Lutters highlight, cyber resilience cannot be achieved solely through technological means, but requires ‘addressing the interpersonal, societal, economic, and organizational factors’ in play.
Who knows: had BOQ better trained, motivated and resourced the Applicant, would this breach have occurred at all?
Food for thought.