By Jack Adams-Jones
In today’s data-driven world, I am constantly on the lookout for ways to leverage data to make informed decisions. Tools like Power BI have become indispensable in my work, providing powerful data visualization and business intelligence capabilities that enable me to create interactive reports and dashboards. However, mastering these tools and their underlying languages, such as DAX (Data Analysis Expressions), can be challenging. This is where AI-powered assistants like ChatGPT come in, offering real-time assistance and enhancing my productivity. A recent survey by GitHub highlights the widespread acceptance of such AI tools among developers: a staggering 92% of US-based developers working in large companies report using an AI coding tool either at work or in their personal time [1].
A Glimpse into the Future of ChatGPT
The future of AI in the workplace is promising, with a potential business version of ChatGPT on the horizon. Rumoured features such as managed workspaces, document uploads, and custom profile prompts could revolutionize how we interact with AI [2].
Custom profiles for different departments, each tailored to the unique needs and challenges of their roles, could become invaluable resources. Imagine an investigator profile: an assistant acting as a master investigator, scouring the internet to source company information. It could essentially become an instant-messaging wiki, drawing on the uploaded documents to support the investigator in their role.
Consider a compliance officer profile, a master negotiator capable of analysing input text for sentiment and drafting or modifying responses to achieve the desired result. This AI assistant could provide invaluable support in negotiations, helping to ensure the best outcomes for the company.
Or a data analyst profile: a coding guru able to generate high-quality code, suggest improvements, and identify bugs. This AI assistant could use the input files to answer queries, identify outliers, find errors, and visualize data as requested. It could become an indispensable tool for data analysts, enhancing both productivity and accuracy.
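To make that concrete, here is the kind of simple outlier check such an assistant might generate on request. This is a minimal sketch: the sample revenue figures and the two-standard-deviation threshold are assumptions for illustration, not anything from a real dataset.

```python
# A minimal sketch of an outlier check an AI assistant might draft.
# The sample data and the 2-standard-deviation cutoff are assumptions.
from statistics import mean, stdev

def find_outliers(values, threshold=2.0):
    """Return values lying more than `threshold` standard deviations from the mean."""
    m, s = mean(values), stdev(values)
    return [v for v in values if abs(v - m) > threshold * s]

# Hypothetical monthly revenue figures with one obvious anomaly.
revenue = [102, 98, 105, 99, 101, 250, 97, 103]
print(find_outliers(revenue))  # → [250]
```

The value of the assistant is less in the few lines of code than in the conversation around them: asking it to justify the threshold, switch to a more robust method, or explain a flagged value.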
As AI becomes more prevalent in our work, it’s crucial to address potential risks. Apple, for instance, has restricted the use of ChatGPT among its employees due to concerns about potential leaks of confidential data [3]. This highlights the need for companies to adopt policies governing the input of sensitive company data into these large language models (LLMs).
Companies big and small are shaping policies that spell out what employees can and can’t do with generative AI tools such as OpenAI’s ChatGPT. Samsung, for example, banned the use of generative AI tools after an employee uploaded sensitive code to ChatGPT [4]. On the other hand, digital contract software company Ironclad developed its own generative AI use policy, even though its product is itself powered by generative AI [4].
These developments underscore the importance of having clear policies in place for the use of AI tools in the workplace. As AI continues to evolve and become more integrated into our work, it’s crucial for companies to stay ahead of the curve and ensure they’re using these tools in a way that’s secure, ethical, and beneficial for all.
References
1. GitHub finds 92% of developers love AI tools
2. Upcoming ChatGPT features: file uploading, profiles, organizations and workspaces
3. Apple Restricts Employee Use of ChatGPT, Joining Other Companies Wary of Leaks
4. Samsung among companies starting to draft ChatGPT policies for workers