About Jan
My name is Jan Bakalar (Bachelor is the English translation of my surname) and I am originally from Prague, Czech Republic. Despite my keen interest in IT from early childhood, which included a lot of programming in QBasic and scripting complex batch files in MS-DOS, I chose to do a Bachelor's degree in Psychology (BSc) followed by a Master's degree (MSc) in Organizational Psychology at the University of Nottingham.
Yet I remained faithful to IT and managed to combine both fields. Even for my dissertation, I chose to design an e-shop where people bought holiday packages, and I simulated a number of errors in it (payments failing, the website crashing unexpectedly, etc.) that participants had to respond to. I found some significant differences between men and women and received a high mark, leading to a first-class degree – while my level of English was barely fluent at the time.
After brief work experience as a Health Care Assistant for the NHS, my wife and I moved from England back to Prague, where I worked in IT Helpdesk environments and built up my tech skills. I was inspired by tech gurus like Paul Browning, who motivated me to keep learning and to get certifications done.
Consequently, I studied and passed exams for the following certifications:
- ITIL v3 Foundation + Service Operations
- Cisco CCENT
- AWS Solutions Architect Associate
- ISO 20001 Auditor
- PRINCE2 – Foundation
- Six Sigma – Green Belt
With growing work experience since 2008, I realized the great potential of optimizing business workflows on a large scale. See Jan's Previous Projects below for more details.
Experience
How do I work?
My approach is both analytical and friendly. I try to get to know the people in the organization and the existing processes before looking for ways to improve them through IT.
1. How – Create meaningful relationships and gain a deep understanding of how the business operates.
2. Why
3. What
4. Do it – Run a pilot with an insightful and carefully selected group of individuals from across teams to test and verify that the desired changes take place. Then communicate the switch-over deadline, provide training to users and roll it out to the rest of the organization.
5. Reflect
Jan's Previous Projects
Situation: My client complained that their workers kept using AI tools and sharing internal company data in them, without considering that some of this data might be used for machine learning and put the company at risk.
Behaviour: I mapped out the most common tools that each team used and looked into API and enterprise-level subscription options for each that do not allow training on the submitted data. For instance, OpenAI offers an affordable team subscription that does not train on the provided inputs. I then created a web application using Next.js 14, integrated it with their SSO provider (Google) using NextAuth, and created a vector database for storing company-internal data. I hired a specialized developer on a fixed contract to finalize the more technical parts, such as connecting the vector DB to the chatbot and improving the UX/UI.
Impact: By using the AI Toolkit, every worker in the company was able to use ChatGPT, DALL-E, Midjourney and other tools without having to worry that their data would leak. Thanks to the authentication and the internal database, previous conversations and generated images were kept for future use without being exposed.
Situation: My client had been operating without being able to tell which of their projects were profitable in relation to the time spent and the expenses incurred on each client's project. They asked for an easy-to-use visualization app.
Behaviour: I studied their existing processes for timesheet recording and expense tracking. First, I migrated the data from spreadsheets to a database using tools like Make.com and leveraged the API of their timesheet system to feed data into the database. Then I used a tool called Qlik, as well as Google's Looker Studio, to make sense of the data by project type, time range, etc.
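As a rough illustration of the API-to-database feed, the sketch below pulls timesheet entries and loads them into a local table that a BI tool can read; the endpoint, field names and table schema are hypothetical placeholders, not the client's actual system.

import sqlite3
import requests

# Hypothetical timesheet API - the URL, token and field names are placeholders.
API_URL = "https://timesheets.example.com/api/v1/entries"
API_TOKEN = "replace-with-a-real-token"

def fetch_entries(since: str) -> list[dict]:
    # Pull all timesheet entries created since the given ISO date.
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params={"from": since},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["entries"]

def load_into_db(entries: list[dict], db_path: str = "projects.db") -> None:
    # Insert the entries into a table that Qlik / Looker Studio can query.
    con = sqlite3.connect(db_path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS timesheet_entries ("
        "id TEXT PRIMARY KEY, project TEXT, worker TEXT, "
        "entry_date TEXT, hours REAL, billable INTEGER)"
    )
    con.executemany(
        "INSERT OR REPLACE INTO timesheet_entries VALUES (?, ?, ?, ?, ?, ?)",
        [
            (e["id"], e["project"], e["worker"], e["date"], e["hours"], int(e["billable"]))
            for e in entries
        ],
    )
    con.commit()
    con.close()

if __name__ == "__main__":
    load_into_db(fetch_entries(since="2023-01-01"))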
Impact: The client was able to identify where resources were being wasted versus which projects were profitable and would benefit from further growth. They also became better organized at ensuring that their workers fill in their timesheets regularly and keep their expenses up to date. The automation of recording and managing expenses led to significant time savings.
Situation: An organization that I worked for was asked by its main client (Volkswagen Group) to obtain TISAX certification labels for working with prototype vehicles and highly confidential data. If the deadline was missed, the client could stop doing business with the company, and up to 75 people who worked for this client internally would lose their jobs.
Behaviour: I carefully studied the requirements of the AL3 certification and realized that three major areas in the company had to be addressed:
- Documentation – internal regulations, HR and IT documentation, exchange of knowledge (KB articles), etc.
- Processes – how things are done within the company in terms of data exchange and the working environment, so that both comply with the standard.
- People – how they operate, their level of training and their readiness for the regular security threats of the online world.
This project took an entire year. I hired two additional people to help me with it and secured full support from the company's stakeholders by explaining to them (through a series of presentations) the serious nature of the requirement and what it would involve to get the job done.
An example of how all three of the above-mentioned areas were addressed is below:
- A Data Classification & Handling Policy that described in detail which data falls under what category and how it should be handled within and outside the organization. Before drafting it, I spent some time asking what data the company worked with and how it was being handled at the time.
- I reviewed how the file storage system was set up on the back-end side and prepared policies that would prevent users from sharing data outside the organization without protection (e.g. as publicly accessible links). I also created a process (an actual automated website form) that allows users of the organization to request a vendor or client account internally. This form and its workflow were fully automated: approval is granted by the respective project owner, the vendor/client account is created automatically for the selected period of time, and the account can be renewed later.
- I created training materials on what was changing and why in terms of file exchange and data classification and handling. After approval from upper management, I presented them to every user in the company and made the materials available on the company's extranet site.
Impact: The company passed the strictest (AL3) TISAX assessment and received certification labels that allowed it to continue its operations without any disruption. While the users were initially not thrilled about the changes to how they worked, over time they realized that the full automation behind the processes actually increased their productivity, while the risk of exposing the sensitive data the company was handling was minimized.
Situation: In late October 2020, I was asked by a company to reduce their file storage costs, since their cash flow did not allow them to renew their Box licenses (due in early January 2021). The timeline for this project was therefore two months. Since they were already using G Suite for email and calendar services, we agreed to use Google Drive. I was given a budget of 2,000 USD for file transfer costs, and they also asked me to tighten the permission structure – to map out who owns what data and who should have access to it (read-only or read+write). There was no documentation available and everyone had access to everything. The total amount of data in Box was close to 180 TB when the project commenced.
Behaviour: I leveraged access logs to identify who had been accessing the different master folders recently. I then contacted the team managers of those workers to determine the data owners, presented the access logs to them and let them decide which users should have write, read-only or no access.
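A simplified sketch of that kind of access-log analysis is below; the log export format and column names are hypothetical stand-ins for what Box actually provides.

import pandas as pd

# Hypothetical CSV export of Box access logs - column names are placeholders.
logs = pd.read_csv("box_access_log.csv", parse_dates=["accessed_at"])

# Keep only recent activity, e.g. the last 90 days of the log.
cutoff = logs["accessed_at"].max() - pd.Timedelta(days=90)
recent = logs[logs["accessed_at"] >= cutoff]

# Count accesses per master folder and user to surface likely data owners.
activity = (
    recent.groupby(["master_folder", "user"])
    .size()
    .reset_index(name="accesses")
    .sort_values(["master_folder", "accesses"], ascending=[True, False])
)
print(activity.head(20))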
Due to the low budget for transfer costs, I could not choose a cloud-to-cloud solution that would preserve permissions and move the data for us. I tested some cheaper options such as Multcloud.com, yet due to the high amount of data that had to be transferred in such a short amount of time, it was not feasible to use them.
In the end, since everyone from the office was working from home and the management confirmed that this would be the case until the project's deadline, I utilized the office's ISP line for the transfer (I set the bandwidth utilization allowance to 80% to leave some traffic for the occasional user).
Based on the complex spreadsheet of data owners and client and departmental folders, I created transfer jobs using tools like rclone & GoodSync. Whenever a specific major folder was copied over, I notified the team that was using it of a switch-over deadline, then marked the original (source) folder as read-only, performed one more sync and applied access permissions for the team on the new folder in Google Drive. In fact, I split all the data into Google shared drives, since this way the data was not owned by a single Google account. I then notified the team that the transfer of that folder was done. I did that with all the folders, finishing the transfer right before the 2020 Christmas break (to give myself a break as well :)).
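An individual rclone transfer job could look roughly like the sketch below; the remote and folder names are placeholders, and since rclone's --bwlimit takes an absolute rate rather than a percentage, 80% of a 1 Gbps office line is approximated here as 100 MB/s.

import subprocess

# Placeholder remote names - "box" and "gdrive-clients" would be rclone remotes
# configured beforehand, the latter pointing at a Google shared drive.
SOURCE = "box:Clients/ClientA"
DESTINATION = "gdrive-clients:ClientA"

def run_sync(source: str, destination: str) -> None:
    # One transfer job: mirror a master folder from Box to a Google shared drive.
    subprocess.run(
        [
            "rclone", "sync", source, destination,
            "--bwlimit", "100M",       # roughly 80% of a 1 Gbps office line
            "--transfers", "8",        # parallel file transfers
            "--checkers", "16",        # parallel file comparisons
            "--log-file", "clienta_sync.log",
            "--log-level", "INFO",
        ],
        check=True,
    )

if __name__ == "__main__":
    run_sync(SOURCE, DESTINATION)  # re-run once more after the source folder is made read-only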
Impact: The business saved 60,000 USD on licensing fees. The cost of the data transfer was under 500 USD, which was highly appreciated by the management. In the end, only about 80 TB of data was transferred and the rest was left on cheaper archiving storage for future reference, if needed. Consequently, the organization saved long-term storage costs on Google Drive, as they would have needed to upgrade to a higher G Suite tier had the data not been separated.
Situation: As a result of the Covid-19 pandemic, I was asked to reduce costs related to financial software (Oracle NetSuite) that provided the entire organization with functionality for project management, timesheets, job scheduling, expense reimbursement, purchase order workflow approval and vendor management.
Behaviour: Moving from a high-budget enterprise all-in-one application to much cheaper tools was a tricky task – there was no significantly cheaper tool that could cover all of the required functionality. From my extensive research, I identified three project management applications that provided some of the other functionality, like timesheets, expense reimbursement or purchase order workflows, but each time there was a major limitation. For instance, the app did not support multi-currency expense reimbursement, or the purchase order workflow supported only one level of approval while three levels were needed (direct manager, project manager & the finance team).
With the management, we shortlisted a project management & timesheet application that everyone was happy with in terms of usability and features. However, this tool had a major drawback that was a deal breaker for the client – they needed to create purchase orders and expense reimbursements with more than one entity in the approval chain and to operate in a multi-currency environment. Therefore, since the timesheets were dealt with, I created a web app in WordPress using the Gravity Forms & Gravity Flow plugins and built customized forms with tailor-made workflows. I added the organization's single sign-on application (Okta) for easy login for the workers. After several months of development & extensive testing with several carefully selected, insightful users, the application was ready for deployment to production.
The third part of the project was to import past data into the new systems, prepare documentation and schedule training slots in different time zones to account for users in Europe and North America.
Impact: The organization saved 170,000 USD on licensing costs while benefiting from more user-friendly applications that were better tailored to its needs. I sent a survey across the user population to gather detailed feedback and made further cosmetic changes that led to a higher satisfaction rate.
Situation: When I worked for a smaller business in 2020, I realized that they stored data on a local Synology NAS disk array. Yet it was (a) running out of space and (b) with the coming of the Covid-19 pandemic, people needed to access the files remotely, which was not possible with no VPN in place. Lastly, the NAS was a single point of failure (SPOF), which was concerning.
Behaviour: Since the organization was also using other online platforms such as Box and Google Drive, I created a Python script that utilized rclone to sync data with the required online cloud storage. With Box, there was a maximum file size limit of 32 GB, so in the script I created a method that split the large media files into chunks (zipped without compression) and then uploaded those with MD5 checks.
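The chunking step worked roughly along the lines of the sketch below; the 30 GB chunk size, paths and part naming are illustrative, and the upload itself (handled by rclone in the real script) is left out.

import hashlib
import zipfile
from pathlib import Path

CHUNK_SIZE = 30 * 1024**3   # stay safely under Box's 32 GB per-file limit
BLOCK_SIZE = 8 * 1024**2    # read and write in 8 MB blocks to keep memory usage low

def md5sum(path: Path) -> str:
    # MD5 checksum used to verify each part after upload.
    digest = hashlib.md5()
    with path.open("rb") as f:
        for block in iter(lambda: f.read(BLOCK_SIZE), b""):
            digest.update(block)
    return digest.hexdigest()

def split_into_stored_zips(source: Path, out_dir: Path) -> list[tuple[Path, str]]:
    # Split one large media file into uncompressed (ZIP_STORED) zip parts under the limit.
    out_dir.mkdir(parents=True, exist_ok=True)
    parts, index = [], 0
    with source.open("rb") as src:
        while True:
            raw_part = out_dir / f"{source.name}.part{index:03d}"
            written = 0
            with raw_part.open("wb") as dst:
                while written < CHUNK_SIZE:
                    block = src.read(min(BLOCK_SIZE, CHUNK_SIZE - written))
                    if not block:
                        break
                    dst.write(block)
                    written += len(block)
            if written == 0:            # nothing left to split
                raw_part.unlink()
                break
            zip_part = raw_part.parent / (raw_part.name + ".zip")
            with zipfile.ZipFile(zip_part, "w", compression=zipfile.ZIP_STORED) as zf:
                zf.write(raw_part, arcname=raw_part.name)   # store only, no compression
            raw_part.unlink()
            parts.append((zip_part, md5sum(zip_part)))
            index += 1
    return parts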
Impact: About 70 TB of data was made available this way, with access restricted based on group-level access policies in G Suite and in Box.
Situation: A new IT inventory system was deployed in a large global corporation to record all CAPEX assets instead of using spreadsheets. However, there was no functionality for mass importing data. Each asset had to be manually 'punched in', which required going through a number of sub-pages within the web application.
Behaviour: Since the web app did not support API requests, the only method of entering data was via the browser. I created a complex Python script leveraging libraries such as Selenium that was able to fetch data from a spreadsheet and do all the clicking on behalf of the IT support staff.
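A trimmed-down sketch of that approach is below; the inventory system's URL and the form field IDs are hypothetical stand-ins, and the real script walked through many more sub-pages and error cases.

import csv

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

INVENTORY_URL = "https://inventory.example.com/assets/new"   # placeholder URL

def add_asset(driver: webdriver.Chrome, asset: dict) -> None:
    # Open the "new asset" form and fill it in from one spreadsheet row.
    driver.get(INVENTORY_URL)
    wait = WebDriverWait(driver, 15)
    # The field IDs below are illustrative - the real form's IDs would differ.
    wait.until(EC.presence_of_element_located((By.ID, "serial_number"))).send_keys(asset["serial"])
    driver.find_element(By.ID, "model").send_keys(asset["model"])
    driver.find_element(By.ID, "purchase_order").send_keys(asset["po_number"])
    driver.find_element(By.ID, "save_button").click()
    # Wait for the confirmation banner before moving on to the next row.
    wait.until(EC.presence_of_element_located((By.CLASS_NAME, "save-confirmation")))

if __name__ == "__main__":
    driver = webdriver.Chrome()
    with open("assets.csv", newline="") as f:
        for row in csv.DictReader(f):
            add_asset(driver, row)
    driver.quit()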
Impact: About 120 technicians across the organization utilized the script, which saved them many hours of manual labour. What is more, the script was able to handle a number of errors that could occur, which made the IT techs aware of inconsistencies (e.g. repeating serial numbers, hardware being out of warranty, missing data on purchase orders, etc.). Overall, the estimated time saved in the initial data import phase was 48 hours per technician, i.e. 5,760 hours in total.
Situation: While I was working for a large international corporation, the upper management showed concern about the growing number of Helpdesk tickets in certain categories, which forced them to hire more Helpdesk agents. They asked me to find out what was happening, run a root cause analysis and propose solutions that could be used on both a regional and a global scale.
Behaviour: I pulled all Helpdesk tickets from the past 6 months and drilled down into the relevant categories to understand why they occurred so often across all regions. Using the Six Sigma and Lean methodologies, I found that different Helpdesk agents chose different solutions, some more efficient than others. There were few SOPs, and those that existed were not shared across the board. In addition, I realized that some of the issues could be mitigated or even prevented by automation.
I came up with a centralized repository for storing SOPs, with an approval process for adding new ones to ensure that a principal review takes place and there are no duplicates. This included training fellow engineers on how best to document processes and how to share them. I also created PowerShell and Python scripts that automated some of the solutions (so techs no longer had to run them manually) and distributed them across the Helpdesk staff. For some of the issues, we applied the fixes as group policies so they worked proactively, without the users ever being aware of the issue.
Impact: The result was a 15-20% drop in Helpdesk tickets per category, which saved the company IT resources equivalent to 90-100 personnel globally. Furthermore, thanks to the automation & efficient SOP processes, the average resolution time was shortened from 17 minutes to 8 minutes.