Automated Jira Ticket Report Generation Per School Using Fabric & Power BI
It would be helpful for school admins and principals to see ticket activity per school per week: total ticket counts, tickets per user, and tickets resolved versus opened. To do this, we should organize the data by school first, so […]
Automating Custom Field Bulk Edits in Jira Post Migration with Fabric
We have migrated around 100k tickets over to Jira through automation, and to clean the Jira production data, some fields need to be altered. Some tickets were assigned to Jin as a default user, but this is problematic for reporting and filtering purposes going forward. As such, we needed to change around 60k tickets […]
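As a minimal sketch of the kind of cleanup call involved, assuming the Jira Cloud v3 REST API: the site URL and issue key below are placeholders, and in practice the ~60k issue keys would come from a JQL search rather than being hard-coded.

```python
# Hedged sketch: clearing a default assignee on migrated tickets via the
# Jira Cloud REST API. The endpoint shape follows the public v3 API
# (PUT /rest/api/3/issue/{issueIdOrKey}/assignee); base URL and keys are
# placeholders, not values from the post.

def build_assignee_update(base_url, issue_key, account_id):
    """Build the URL and body for reassigning (or unassigning) one issue.

    Passing account_id=None unassigns the issue, which is one way to clear
    a placeholder default user.
    """
    url = f"{base_url}/rest/api/3/issue/{issue_key}/assignee"
    payload = {"accountId": account_id}  # None => leave the issue unassigned
    return url, payload
```

The actual run would loop this over the affected issue keys with an HTTP client such as `requests`, throttled to respect Jira's rate limits.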
Bulk Editing Custom Fields in Jira
A very useful tool in Jira, post migration or post API bulk updates, is the bulk edit feature for issues. You can access it from the Filters feature, where you first create a JQL query that accurately lists all of the issues that need to be bulk edited in the same manner. In my case, […]
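As a sketch, a JQL query for this kind of cleanup might look like the following; the project key and assignee are hypothetical placeholders, not the actual values from this post:

```
project = HELP AND assignee = "Jin" ORDER BY created ASC
```

Once the query returns exactly the issues that need the same change, Jira's bulk change option can apply one edit to the whole result set.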
Real-time Most Recent Tickets (SolarWinds to Jira Service Management) Migration using Microsoft Fabric
I have two departments that I need to run this for: Facilities and IT. This initiative is being executed concurrently with UAT. Over 150k tickets have already been migrated; however, to keep things testable for our users, we need to make sure updates currently happening in the old ticket system are somewhat replicated in […]
Confluence Knowledge Base Transition & Soft Approval Workflow
A collaborative effort in our IT department has been made to centralize documentation in Confluence instead of using products like OneNote and SharePoint. This aligns with the decision to move our ticketing system to Jira Service Management due to its robust integration capabilities. A meeting was held to discuss a bare-bones document structure. This was […]
Automating Knowledge Base/Documentation Transfer from OneNote to Confluence using Fabric
The general logic of executing the OneNote data ingest. OneNote data ingest PySpark code:

```python
import requests
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType

# Initialize Spark session
spark = SparkSession.builder.appName("OneNoteDataPipeline").getOrCreate()

# Authentication
def get_access_token():
    url = "https://login.microsoftonline.com/982d56ce-f6e7-4334-a1c4-5d6779c789a6/oauth2/v2.0/token"
    payload = {
        "grant_type": "client_credentials",
```
[…]
Chrome Extension – Ukrainian Phonetic Keyboard Mapping
There is an educational need for Ukrainian students to use the Ukrainian phonetic keyboard when writing documents or papers on their Chromebooks. That being said, there was no Ukrainian phonetic Chrome extension available, and the one provided here: https://github.com/nagornyi/ukrainian-phonetic-keyboard-chromeos did not fulfill the requirements for students using a particular chrome […]
Leveraging PySpark to Automate Jira Ticket Creation
I am trying to create a PySpark script in Microsoft Fabric to automate Jira API ticket creation (POST) calls against a lakehouse delta table. To do so, I have to educate myself on the PySpark syntax and framework, which I will do in this blog post. To reference a lakehouse delta table: Result: This was able […]
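As a sketch of the per-row mapping such a script needs, here is how one lakehouse row might be shaped into a Jira Cloud v3 issue-creation body. The row keys (`subject`, `body`), project key, and issue type are hypothetical placeholders; the post's actual delta table schema is not shown in this excerpt.

```python
# Hedged sketch: shaping one ticket row (as a dict) into the body for
# POST /rest/api/3/issue. In Fabric, rows would come from something like:
#   df = spark.read.table("tickets")   # lakehouse delta table name assumed

def row_to_issue_payload(row, project_key="HELP"):
    """Map a ticket row to a Jira Cloud v3 issue-creation payload."""
    return {
        "fields": {
            "project": {"key": project_key},
            "summary": row["subject"],
            "issuetype": {"name": "Service Request"},  # issue type assumed
            # Jira Cloud v3 expects descriptions in Atlassian Document Format
            "description": {
                "type": "doc",
                "version": 1,
                "content": [
                    {
                        "type": "paragraph",
                        "content": [{"type": "text", "text": row["body"]}],
                    }
                ],
            },
        }
    }
```

Each payload would then be sent with an authenticated POST, one call per row (or batched via the bulk endpoint).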
Automating Ticket Migration for Jira
I have been tasked with automating the migration of over 100k SolarWinds Helpdesk tickets to Jira Service Management as we prepare for the helpdesk transition. Plan: My current plan is to create an ETL on Microsoft Fabric that ingests CSV ticket data from SolarWinds and manipulates field values to fit our specific Jira […]
Microsoft Fabric & Spark SQL
The most common way to work with data in delta tables in Spark is to use Spark SQL. Let's say we have a table in OneLake called products: In the connected Spark SQL, we can insert data like so: When you want to work with delta files rather than catalog tables, it may be simpler […]
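As a sketch of the two access styles described here, with an assumed products schema and an assumed file path:

```sql
-- Insert into the catalog table (column order and types assumed)
INSERT INTO products VALUES (1, 'Widget', 2.99);

-- Read a delta folder directly instead of going through a catalog table
-- (Spark SQL's delta.`path` syntax; the path is a placeholder)
SELECT * FROM delta.`Files/products`;
```

The second form skips the metastore entirely, which is handy when the delta files live in OneLake but have not been registered as a table.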