Coming from my last role as a Software Engineering Manager in an organization that, at the time, had not yet fully embraced GenAI (they have now), I'm still working to understand my own views on AI and how to approach it in future work. To this end, I have set up my own Open-WebUI portal at home, subscribed to various models at Google, Azure, Together, and Fireworks, and have been exploring scenarios in which GenAI can be useful.
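For the curious, much of that exploration happens against the portal's OpenAI-compatible API rather than the chat UI. Here's a minimal sketch of what that looks like; the URL, API key, and model name are placeholders from my own setup, so substitute your own (and check the Open WebUI docs if the endpoint differs in your version):

```python
import requests

# Placeholders from my setup; substitute your own instance details.
OPEN_WEBUI_URL = "http://localhost:3000/api/chat/completions"
API_KEY = "sk-placeholder"  # generated in Open WebUI's settings
MODEL = "llama3.1"          # whichever model your portal serves

response = requests.post(
    OPEN_WEBUI_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": MODEL,
        "messages": [
            {"role": "user", "content": "Draft a release note from this changelog: ..."}
        ],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Scripting against the portal this way makes it easy to trial GenAI on small, repeatable tasks before forming opinions about where it actually helps.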
Here, I'll discuss some areas I've explored, as well as how I see workplace adoption playing out if I take on the role of team/people manager again. I'll likely add to this article later as new considerations occur to me. I don't want this list to be taken as prescriptive for others; it's just food for thought from a dev leader who happens to be thinking about this stuff.
Code Completion/Assistance
This is the most obvious area I'll have to contend with as a Software Engineering Manager. Code completion and generation are everywhere these days, and there are several considerations I'll need to keep top of mind in my role as a leader:
- If used, AI must be used effectively by team members, not only to deliver value but to produce code of sufficient quality that it doesn't introduce needless technical debt:
- I'll need to ensure code review processes are stringent enough to catch AI errors and inefficient code structures/styles;
- I'll need to keep track of and capture examples of defects or new technical debt introduced by AI usage in order to (blamelessly) work to prevent these problems from arising in future;
- Don't overlook reviews of test code. GenAI can be very good at generating unit tests, but it can also make critical mistakes that leave a test case ineffective or flawed. False positives and negatives in test cases could lead to major incidents (see the sketch after this list).
- Avoid hiring so-called "vibe coders" (those who produce code almost entirely using AI without practical understanding of what they're producing or how it works);
- Historically, I haven't been one to focus too strongly on technical skills in interviews with junior developers, but this may need to change now that more and more juniors are becoming dependent on AI;
- I'll need new hires to be skilled problem solvers and critical thinkers in their own right; testing for specific languages or specific tools matters less.
- Understand that AI usage can affect the cognitive abilities of those who use it, shifting focus away from software development and toward prompt generation, review, and correction of AI output.
- Monitoring trends on the team is important here. Survey team members about sources of extraneous cognitive load and the points where they struggle;
- Perhaps track ballpark estimates of the balance of time spent writing code vs. reviewing/correcting code;
- Embrace a balance of team members who make use of AI vs. those who lean on more traditional programming. Diversity in degree of adoption could help ensure programming skills on the team remain strong.
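To make the test-review point above concrete, here's a minimal sketch (the function and tests are hypothetical) of the kind of subtly flawed unit test GenAI can produce. The first test wraps its assertion in an overly broad try/except, so the suite stays green even if the code under test is completely broken:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    # Hypothetical function under test.
    return price * (1 - percent / 100)

class DiscountTests(unittest.TestCase):
    def test_discount_flawed(self):
        # AI-generated style: the broad except also swallows AssertionError,
        # so this test passes no matter what apply_discount returns.
        # That's a false pass that can let a real defect ship.
        try:
            self.assertAlmostEqual(apply_discount(100.0, 10.0), 90.0)
        except Exception:
            pass

    def test_discount_reviewed(self):
        # Reviewed version: assertion failures propagate, and boundary
        # cases (0% and 100%) are checked explicitly.
        self.assertAlmostEqual(apply_discount(100.0, 10.0), 90.0)
        self.assertAlmostEqual(apply_discount(100.0, 0.0), 100.0)
        self.assertAlmostEqual(apply_discount(100.0, 100.0), 0.0)

if __name__ == "__main__":
    unittest.main()
```

A reviewer who skims only the test names and a green CI run would never notice the first test is inert; that's exactly the kind of structural flaw test-code review needs to catch.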
Job Applications, Resumes, etc.
More and more, job applicants are leaning on AI to format resumes, word cover letters, and actually apply for roles. This is, to some extent, a natural response to organizations' adoption of AI in their own hiring processes (e.g., AI parsing and filtering of applications and resumes in ATSs, AI-penned communications to applicants, etc.). I can't fault applicants for leaning on these tools. However, there are some things that should be kept in mind as a hiring manager:
- Legally, the use of GenAI in the hiring process is untested waters:
- My first inclination is to avoid using AI in the hiring process, though with the sheer number of applicants per role, I totally understand why first-pass AI filters are desirable. While they could be prone to major mistakes, I can also see how they're an improvement (for both the applicant and the company) over traditional keyword- or phrase-filter-based ranking in an ATS;
- In my opinion, resumes and cover letters are the intellectual property of their creators. They also contain PII (Personally-Identifiable Information) and so need to be handled with care. Companies and hiring managers should be explicit about whether application data will be retained, for how long, and whether that retained data will be used for AI training in future.
- As a leader I want to hire people who can produce original work rather than copy the work of others and claim it as their own:
- I don't fault people for including AI in the job application process or their work. I will, however, judge resumes, letters, code samples, etc. that are fully AI-generated. There are many positive ways to make use of AI without having it directly do your work for you;
- If someone is willing to completely take credit for the work of AI (which was in turn cribbed from training materials questionably obtained without credit in the first place), I have to wonder about their ethics more broadly when it comes to humility and giving credit where credit is due.
- The interview process in particular is about assessing your own fit for the role. That means you, not a persona artificially constructed around you by AI:
- This is fuzzy territory, as interviews have always been somewhat disconnected from the day-to-day nature of working in a company: during an interview we all try to put our best foot forward and sometimes even put on a persona we wouldn't ordinarily embody;
- AI (and human) coaching, note preparation/organization, etc. seem like perfectly reasonable ways to prepare for an interview;
- Use of AI tools (or human substitutes) to cheat during an interview or testing process is completely off-limits. If you don't know an answer, I want to hear it from you and learn that you are capable of humility. If you struggle under the pressure of an interview, that's okay and I can try to accommodate. But as soon as you try to cheat your way through it, it's over. That speaks volumes about your ethics and how you will work if hired.
Sorting Opinions and Facts about AI
Unsurprisingly, people hold various opinions of AI and we are all entitled to an opinion. We can also hold contradictory views: AI can be both problematic and important.
Opinions aside, there are many inconvenient facts about AI that can and should be discussed in and out of the workplace:
- Generative AI is subject to the biases of both its developers and its training materials:
- Generative AI can be racist, for instance;
- Generative AI hallucinates, and we still don't fully understand why;
- LLM training (and sometimes usage) potentially has negative impacts on the environment due to power consumption.
- GenAI training materials are usually sourced questionably:
- Meta, for instance, was caught pirating training materials.
- Getty Images is still pursuing a case against Stability AI (makers of Stable Diffusion) for scraping images on their site for AI training.
- AI censorship occurs at both the training level and the hosting level:
- Under the guise of "safety" or "guardrails", this can still pose unforeseen problems;
- While DeepSeek is criticized for censoring its hosted models in line with safety/guardrails and Chinese government/party lines, OpenAI likewise censors its models without much transparency about what gets censored or why.
- GenAI, as noted earlier, has (potentially negative) impacts on developer cognition, according to a study by Microsoft.
- Hosted LLMs and other forms of GenAI are subject to interception and storage of prompts and responses by the hosting provider. Unless they adhere to strict, certified security and storage requirements, these providers are not inherently trustworthy with your privacy or the security of the data you submit.
Given these facts, opinions are understandable and an ethical stance is commendable. That said, AI is here, it's in the workplace across industries, and being 100% averse to its usage would eliminate me from consideration for a multitude of positions in an already difficult job market.
So, it's important to be open-minded, adaptable, and willing to learn new things, even when those things are problematic. As with many other technologies and tools, I can hold contradictory views on GenAI while still working with it and with others who use it.