Cool Managers Let Bots Talk. Smart Ones Don’t.
You can practically hear the keynotes now: “Let AI handle the messaging so leaders can focus on strategy.” Somewhere, a demo shows a smiling manager clicking “Auto-Write,” and a perfectly formatted pulse of empathy lands in everyone’s inbox. What the demo never shows is the reply-all thread from the actual humans who received it, the quiet credibility hit the manager just took, and the compliance officer who just started to sweat.
Recently, HR Dive highlighted peer-reviewed research on exactly this phenomenon: employees can spot when their bosses lean too hard on AI to write to them—and they trust those bosses less when it happens. The findings aren’t hand-wavy opinion; they’re grounded in a study of 1,100 professionals published in the International Journal of Business Communication. Low-assist editing (think Grammarly) was fine. But when AI did the composing—especially for praise, feedback, or anything that requires tone and care—perceived sincerity and trust dropped off a cliff. In other words, if the message is supposed to feel human, outsourcing the humanity backfires.
When Automation Talks for You, Your Company Owns the Words
“Let the bot answer; we’re busy,” sounds efficient—until the bot is wrong. Ask Air Canada, which was ordered to compensate a passenger after its customer-facing chatbot misstated the airline’s bereavement policy. The tribunal rejected the company’s argument that the chatbot was a “separate legal entity.” If your system says it, you said it. That precedent doesn’t just sting; it clarifies accountability in the age of automated replies.
New York City learned a similar lesson in public. Its official MyCity chatbot, meant to help entrepreneurs navigate rules, told business owners it was okay to do things that are, in fact, illegal—like firing workers who complain about harassment or refusing cash payments. The city kept it online while “testing,” and the national press did the rest. This is what happens when an answer engine is treated like a search box with manners. If the output can change behavior or create exposure, you cannot “ship it and see.”
Cool Managers Don’t Let Robots Write the Warm Stuff
Back to your team. The Florida/USC study found that people were broadly okay with AI for proofreading and polishing, but viewed managers as less sincere when the AI’s role moved from brushing to painting. Acceptance plummeted for congratulations, motivation, and feedback—the very messages that set culture. The more the machine “sounded like you,” the less you sounded like you. That’s not a vibe problem; it’s a leadership one.
Does Automated Messaging Comply with Sarbanes–Oxley?
There’s no clause in the Sarbanes–Oxley Act that says, “Thou shalt not use generative AI.” What SOX does require is that public companies maintain effective internal control over financial reporting (Section 404) and robust disclosure controls and procedures (Exchange Act Rule 13a-15). In plain English: material information must be accurate, authorized, and controlled; your processes must ensure that, and you must be able to prove it. If a bot can send messages that touch policy, finance, controls, or investor-relevant disclosures without human authorization, you may be undermining the very controls SOX expects you to have. It’s not the tool that violates SOX—it’s how you design the process around it.
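To make “authorized and controlled” concrete, here is a minimal sketch of the shape such a gate might take. Everything in it (the topic list, the message type, the in-memory audit log) is hypothetical; real controls live in workflow and archiving systems, not a dozen lines of Python. The shape is what matters: the human sign-off and the log come before the send.

    from dataclasses import dataclass
    from datetime import datetime, timezone

    # Hypothetical control list: topics that always require a named human approver.
    RESTRICTED_TOPICS = {"financial_results", "internal_controls", "disclosure", "policy_change"}

    @dataclass
    class DraftMessage:
        author: str                      # the system or model that drafted the text
        topic: str
        body: str
        approved_by: str | None = None   # must name a human for restricted topics

    audit_log: list[dict] = []           # stand-in for a real, tamper-evident retention store

    def dispatch(msg: DraftMessage) -> None:
        # The control: restricted content does not go out without a human sign-off.
        if msg.topic in RESTRICTED_TOPICS and msg.approved_by is None:
            raise PermissionError(f"'{msg.topic}' requires human authorization before sending")
        # The evidence: every send is recorded so the control can be proven later.
        audit_log.append({
            "sent_at": datetime.now(timezone.utc).isoformat(),
            "author": msg.author,
            "topic": msg.topic,
            "approved_by": msg.approved_by,
            "body": msg.body,
        })
        # ...hand off to the approved, retained channel here...

Swap in your own stack as needed; the point is that a regulator-facing record exists the moment anything goes out, not after the subpoena arrives.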
If you operate in regulated corners of finance, the bar is higher. The SEC has hammered firms with hundreds of millions in penalties for “off-channel” communications that weren’t preserved or supervised. Those cases weren’t about AI per se, but they’re a bright-red warning for anyone auto-sending messages from systems that your recordkeeping stack can’t capture. If your AI drafts or dispatches messages outside approved channels—or in approved channels without retention—you’ve recreated the same exposure with a shinier interface.
And if your “automated messaging” includes calls or voice drops, remember the FCC’s 2024 ruling: AI-generated voices in robocalls fall under the Telephone Consumer Protection Act. Translation: consent and other TCPA requirements apply, and regulators can fine you for skipping them. Email-style campaigns carry their own obligations under CAN-SPAM as well; monitoring what vendors send “on your behalf” is part of the law. “The bot did it” is not a defense.
Data Privacy: The Message You Didn’t Mean to Send
Automated drafting is also a data governance problem. High-profile companies have restricted or banned use of public chatbots after staff pasted sensitive source code and meeting transcripts into them—exactly the sort of leak your policies are supposed to prevent. If your AI system is cloud-hosted, logs prompts, or trains on enterprise content, you’re not just sending a message to an employee—you might be sending your secrets to the world.
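For readers who want the control rather than just the memo, here is a deliberately crude sketch of an outbound filter: screen what is about to leave for an external model, and block it if it matches a pattern you never want to share. The patterns below are placeholders, and real data-loss-prevention tooling is far more sophisticated, but the principle fits in a few lines.

    import re

    # Placeholder patterns for content that should never leave the building.
    # Real DLP tooling is far more capable; these are illustrative only.
    SENSITIVE_PATTERNS = [
        re.compile(r"-----BEGIN (?:RSA|EC|OPENSSH) PRIVATE KEY-----"),  # key material
        re.compile(r"\b(?:AKIA|ASIA)[0-9A-Z]{16}\b"),                   # AWS-style access key IDs
        re.compile(r"\bconfidential\b", re.IGNORECASE),                 # labeled documents
    ]

    def safe_to_send(prompt: str) -> bool:
        """Screen text before it is sent to an external, cloud-hosted model."""
        return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

    # Example: this prompt would be blocked, not silently forwarded.
    prompt = "Summarize this CONFIDENTIAL merger memo for the all-hands..."
    if not safe_to_send(prompt):
        print("Blocked: prompt matches a sensitive-content pattern")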
The Employee’s Eye-Roll Index
Employees aren’t naïve. They notice the sudden shift to frictionless “voice,” the identical phrasing across different managers, the 3:07 a.m. timestamp, the odd emotional temperature. The University of Florida release quantifies the reaction: sincerity scores for managers crater when AI is perceived to be doing the composing. Trust isn’t built by tightening your prose; it’s built by taking the time to write it yourself when it matters. AI can proofread; it can’t care. Your team can tell the difference.
So, Should You Automate Your Messages?
Use AI like a seatbelt, not a chauffeur. Let it catch typos and tidy syntax. Don’t let it deliver praise, criticism, or anything with legal or financial teeth. For external comms, keep it on a leash: approved channels, human authorization, retention turned on, and language that has actually been read by a person who will sign their name to it. If your company insists on auto-sending at scale, treat every automated message as if it were marketing under CAN-SPAM and as if a regulator will ask for the log. Because one day, they might.
The Part Where We Admit the Real Reason
The “AI wrote it for me” pitch promises to make managers efficient. But the moments where efficiency matters least are the ones your team remembers most. Congratulating someone. Owning a mistake. Explaining a hard call. Those are not throughput problems; they’re relationship investments. Delegating them to a machine doesn’t make you modern. It makes you absent.
Epilogue from the Sidelines
For everyone who’s received those uncanny notes that sound like corporate Mad Libs, you’re not crazy. The vibe is off because the authorship is off. If the goal is to be a better communicator, the shortcut isn’t a bot—it’s time, clarity, and a willingness to hit backspace yourself.
About the Author: Markus Brinsa is the Founder and CEO of SEIKOURI Inc., an international strategy consulting firm specializing in early-stage innovation discovery and AI Matchmaking. He is also the creator of Chatbots Behaving Badly, a platform and podcast that investigates the real-world failures, risks, and ethical challenges of artificial intelligence. With over 15 years of experience bridging technology, business strategy, and market expansion in the U.S. and Europe, Markus works with executives, investors, and developers to turn AI’s potential into sustainable, real-world impact. Contact: ceo@seikouri.com
Sources:
Trust impact of AI-written manager messages
International Journal of Business Communication study (Coman & Cardon, 2025): https://journals.sagepub.com/doi/10.1177/23294884251350599
University of Florida news release: https://news.ufl.edu/2025/08/writing-ai-work/
USC Marshall write-up: https://www.marshall.usc.edu/news/ai-assisted-emails-may-put-trustworthiness-risk-workplace-communications
HR Dive coverage: https://www.hrdive.com/news/managers-risk-loss-of-trust-by-over-relying-on-ai-written-messages/758098/
Real-world harm from automated messaging/chatbots
Air Canada liable for chatbot misinformation (tribunal case): ABA Business Law Today summary: https://www.americanbar.org/groups/business_law/resources/business-law-today/2024-february/bc-tribunal-confirms-companies-remain-liable-information-provided-ai-chatbot/ ; The Guardian report: https://www.theguardian.com/world/2024/feb/16/air-canada-chatbot-lawsuit
NYC “MyCity” chatbot giving illegal advice: Reuters: https://www.reuters.com/technology/new-york-city-defends-ai-chatbot-that-advised-entrepreneurs-break-laws-2024-04-04/ ; AP: https://apnews.com/article/6ebc71db5b770b9969c906a7ee4fae21 ; The Markup investigation: https://themarkup.org/news/2024/03/29/nycs-ai-chatbot-tells-businesses-to-break-the-law
DPD customer-service bot swearing at customers (brand hit): The Guardian: https://www.theguardian.com/technology/2024/jan/20/dpd-ai-chatbot-swears-calls-itself-useless-and-criticises-firm
Compliance & legal (Sarbanes–Oxley, SEC, FCC, CAN-SPAM)
SOX §404 (ICFR): SEC rulemaking page: https://www.sec.gov/rules-regulations/2003/03/managements-report-internal-control-over-financial-reporting-certification-disclosure-exchange-act ; SEC study overview of §404: https://www.sec.gov/news/studies/2009/sox-404_study.pdf
Exchange Act Rule 13a-15 (disclosure controls): Cornell LII text: https://www.law.cornell.edu/cfr/text/17/240.13a-15
SEC “off-channel comms” enforcement (recordkeeping): SEC press releases—$390M wave (Aug 14, 2024): https://www.sec.gov/newsroom/press-releases/2024-98 ; $88M wave (Sept 24, 2024): https://www.sec.gov/newsroom/press-releases/2024-144 ; FY2024 totals: https://www.sec.gov/newsroom/press-releases/2024-186 ; Reuters recap: https://www.reuters.com/markets/us/sec-fines-11-companies-more-than-88-mln-over-record-keeping-violations-2024-09-24/
FCC: AI-generated voices in robocalls are illegal under the TCPA (Declaratory Ruling, Feb 8, 2024): https://www.fcc.gov/document/fcc-makes-ai-generated-voices-robocalls-illegal ; Ruling PDF: https://docs.fcc.gov/public/attachments/FCC-24-17A1.pdf ; AP explainer: https://apnews.com/article/a8292b1371b3764916461f60660b93e6
CAN-SPAM basics (FTC): https://www.ftc.gov/business-guidance/resources/can-spam-act-compliance-guide-business ; Rule page: https://www.ftc.gov/legal-library/browse/rules/can-spam-rule ; FCC CAN-SPAM overview: https://www.fcc.gov/general/can-spam
Data-leak risk (why many firms restrict auto-drafting)
Samsung bans staff use after leaks (Reuters): https://www.reuters.com/technology/chatgpt-fever-spreads-us-workplace-sounding-alarm-some-2023-08-11/ ; Bloomberg recap: https://www.bloomberg.com/news/articles/2023-05-02/samsung-bans-chatgpt-and-other-generative-ai-use-by-staff-after-leak
Apple restricts internal ChatGPT use (Reuters): https://www.reuters.com/technology/apple-restricts-use-chatgpt-wsj-2023-05-18/