ORFIUM’s BI Toolkit and Skillset

We’re back

As described in a previous blog post from ORFIUM’s Business Intelligence team, the set of tools and software we use has changed as time progressed. When the team was two people strong, the list of software was short; a much longer and more sophisticated one is in use today.

2018-2020

As described previously, the two people in the BI team handled multiple types of requests, both from internal customers within the company and from ORFIUM’s external customers. For the most part they dealt with data visualization, along with a smaller share of data engineering and some data analysis.

Since the team was just the two of them, tasks were more or less divided into engineering and analysis vs. visualization. It is easy to guess that a lot of Python scripting was used to combine data from Amazon Athena with Google Spreadsheets or ad-hoc CSVs. Data were retrieved from these various sources and, after some (more often than not complex) transformations and calculations, the final deliverables were CSVs to either send to customers or load into a Google Sheet. In the latter case a simple pivot table was also bundled into the deliverable, to jump-start any further analysis by the (usually internal) customer.
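
A minimal sketch of the kind of glue script described above, assuming a pyathena connection; the table names, file names and join keys are purely illustrative:

```python
import pandas as pd
from pyathena import connect  # assumes pyathena is installed and AWS credentials are configured

# Illustrative Athena connection; the staging bucket and region are placeholders
conn = connect(s3_staging_dir="s3://example-athena-results/", region_name="us-east-1")

# Pull a dataset from Athena (hypothetical table)
revenue = pd.read_sql("SELECT asset_id, month, revenue FROM analytics.monthly_revenue", conn)

# Ad-hoc input exported from a Google Sheet as a CSV (hypothetical file)
mapping = pd.read_csv("client_asset_mapping.csv")  # columns: asset_id, client_name

# Combine, aggregate, and write the deliverable CSV
report = (
    revenue.merge(mapping, on="asset_id", how="left")
           .groupby(["client_name", "month"], as_index=False)["revenue"]
           .sum()
)
report.to_csv("client_revenue_report.csv", index=False)
```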

In other cases, where the customer requested graphs or a whole dashboard, the BI team used Amazon Athena’s SQL editor to run the exploratory analysis, and once the proper dataset for analysis was identified, we saved the results to a separate schema (dataset) in Athena itself. The idea behind that approach was to make use of Amazon’s internal integration between its tools, so we delivered our solutions in Amazon QuickSight. At the time this seemed like the decision that would produce deliverables fastest, though not the most beautiful or the most scalable.

QuickSight offers very good integration with Athena, since both sit under the Amazon umbrella. To be completely honest, though, the BI Analyst’s working experience at that point in time was not optimal. From the consumer side, the visuals were efficient but not particularly pleasant to look at, and from ORFIUM’s perspective, sharing our dashboards externally required a number of full AWS accounts, which created additional cost.

This process changed slightly when we decided to evaluate Tableau as our go-to solution for data visualization. One of the two BI members at the time leaned pretty favorably towards Tableau and decided to pitch it. Through an adoption proposal that was eventually approved by ORFIUM’s finance department, Tableau came into our quiver and soon became our main tool of choice for data visualization. It allows management to make better and more educated decisions, and showcases the value our company can offer to current and potential future clients.

This part of BI’s evolution led to the deprecation of both QuickSight and Python, as pure SQL queries and DML were developed to create tables within Athena, and some custom SQL queries were embedded in the Tableau connection to the data warehouse. We focused on uploading ad-hoc CSVs or data from Google Sheets to Athena, and from there the almighty SQL took over.

2021- 2022

The team eventually grew larger and more structured, and the company’s data vision shifted towards Data Mesh. Inevitably, we needed a new and extended set of software.

A huge initiative to migrate our whole data warehouse from Amazon Athena to Snowflake started, with BI’s main data sources playing the role of the early adopters. The YouTube reports were the first to be migrated, and shortly after the Billing reports were created in Snowflake. That was it, the road was open and well paved for the Business Intelligence team to start using the vast resources of Snowflake and start building the BI-layer.

What started as a small code-migration project, pointing to the proper source and recreating the tables Tableau expected from us, turned into a large project of fully restructuring the way we worked. In the past, the Python code used for data manipulation and the SQL queries used to create the datasets for visualization were stored, respectively, in local Jupyter notebooks and either in view definitions in Athena or in Tableau data source connections. There was no version control; there was a GitHub repo, but it was mainly used as code storage for ad-hoc requests, with limited focus on keeping it up to date or explaining the reasoning behind updates. There were no feature branches, and almost all new commits on the main branch added new ad-hoc files to the root folder with the default commit message. This situation, despite being a clear pain point for the team’s efficiency, emerged as a huge opportunity to scrap everything and start working properly.

We set up a working guide for our Analysts: training on Git and GitHub usage, working with branches, pull request templates, commit message guidelines, and SQL formatting standards, all deriving from the concept of having an internal Staff Engineer. We started calling the role Staff BI Analyst, and we currently have one person setting the team’s technical direction. We’ll discuss this role further in a future blog post.

At the same time we were exploring options for combining tools so that the BI Analysts could focus on writing proper and efficient SQL queries, without being fully dependent on Data Engineers to build the infrastructure for data flows, and without requiring Python knowledge to create complex DAGs. dbt and Airflow surfaced from our research and, frankly, the overall hype, so we decided to go with the combination of the two.

Initially the idea was to just use Airflow: an elegant loop would scan the dags folder and, relying on folder structure and naming conventions for the SQL files, only a SnowflakeOperator would be needed to turn each subfolder of the dags folder into a DAG in the Airflow UI. Each file in the folder would become a SnowflakeOperator task, and the dependencies would be handled by the files’ naming convention. In practice, a simple folder structure would automatically create a dynamic DAG, along the lines of the sketch below.
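
A rough sketch of that dynamic-DAG idea, not our production code: it assumes a `sql/<product>/<NN>_<task>.sql` layout where the numeric file prefix defines execution order, plus a configured `snowflake_default` connection.

```python
import os
from datetime import datetime

from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

SQL_ROOT = os.path.join(os.path.dirname(__file__), "sql")  # hypothetical layout: sql/<product>/<NN>_<task>.sql

for product in sorted(os.listdir(SQL_ROOT)):
    product_dir = os.path.join(SQL_ROOT, product)
    if not os.path.isdir(product_dir):
        continue

    dag = DAG(
        dag_id=f"bi_{product}",
        start_date=datetime(2021, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    )

    previous = None
    for sql_file in sorted(f for f in os.listdir(product_dir) if f.endswith(".sql")):
        with open(os.path.join(product_dir, sql_file)) as handle:
            task = SnowflakeOperator(
                task_id=os.path.splitext(sql_file)[0],
                sql=handle.read(),
                snowflake_conn_id="snowflake_default",
                dag=dag,
            )
        if previous is not None:
            previous >> task  # the file-name ordering drives the dependency chain
        previous = task

    globals()[dag.dag_id] = dag  # expose each generated DAG to the Airflow scheduler
```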

No extra Python knowledge needed, no Data Engineers needed: just store the proper files with proper names. We also briefly experimented with DAGfactory, but we soon realized that Airflow should just be used as the orchestrator of the analytics tasks, and the analytics logic itself should be handled by something else. All of this was abandoned very soon after dbt was fully onboarded to our stack.

Anyone who works in the data and analytics field must have heard of dbt, and if they haven’t already, they should. That is why there is nothing particularly innovative to describe about our dbt usage. We adopted dbt early in its development, first installing v0.19.1, and after an initial setup period with our Data Engineers, we combined Airflow with dbt Cloud for our production data flows and the core dbt CLI for local development. Soon after that, in some of our repos, we started using GitHub Actions to schedule and automate runs of our data products.

All of the BI Analysts in our team are now expected to attend all courses of the Learn Analytics Engineering with dbt program offered at dbt Learn. The dbt Analytics Engineering Certification Exam remains optional. However, we are all fluent in using the documentation and the Slack community. Generic tests dynamically created through YAML, alerts in our instant messaging app whenever a DAG fails, and snapshots are just some of the features we have set up to help the team. As mentioned above, our Staff BI Analyst plays a leading role in creating this culture of excellence.
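
As an example of the alerting mentioned above, here is a minimal sketch of a DAG failure callback; the webhook URL, DAG name and message format are illustrative, and the real setup may differ.

```python
from datetime import datetime

import requests
from airflow import DAG

WEBHOOK_URL = "https://hooks.example.com/bi-alerts"  # hypothetical incoming webhook


def notify_failure(context):
    """Post a short alert with the failed DAG, task, and run date."""
    task_instance = context["task_instance"]
    message = (
        f"DAG `{task_instance.dag_id}` failed on task `{task_instance.task_id}` "
        f"for run {context['ds']}."
    )
    requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)


dag = DAG(
    dag_id="bi_layer_refresh",  # illustrative DAG name
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"on_failure_callback": notify_failure},  # applied to every task in the DAG
)
# tasks would then be added to this DAG as usual
```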

There it was. We embraced the analytics engineering mindset and turned ETL into ELT, finally removing the hard dependency on Data Engineers. It was time to enjoy the fruits of Data Mesh: Data Discovery and Self-Service Analytics.

2023-beyond

Having implemented more or less all of ORFIUM’s Data Products on Snowflake with proper documentation, we just needed to proceed to the long-awaited data democratization. Two key pillars of democratizing data are making it discoverable and making it available for analysis by non-BI Analysts too.

Data Discovery

As Data Mesh principles dictate, each data product should be discoverable by its potential consumers, so we also needed to find a technical way to make that possible.

We first needed to ensure that data were discoverable. For this, we started testing out some tools for data discovery. Among the ones tested was Select Star, which turned out to be our final choice. During the period of trying to find the proper tool for our situation, Select Star was still early in its evolution and development, so after realizing our sincere interest, they invested in building a strong relationship with us, consulting us closely when building their roadmap and communicating with us frequently to get our feedback as soon as possible. The CEO herself, Shinji Kim, attended our weekly call, helping us make not just our data discoverable to our users, but also the tool itself easy to use, in order to increase adoption.

Select Star offered most of the features we knew we wanted at that time, with a quite attractive pricing plan that was in line with our ROI expectations.

Now, more than a year after our first implementation, we have almost 100 active users on Select Star, which is a pretty large share of the internal data consumer base within ORFIUM, given that we have quite a large operations department of people who do not need to access data or metadata.

We are looking to make it the primary gateway to our data. Every analysis, even an initial thought, should start with Select Star, to explore whether the data already exist.

Now, data discovery is one thing, and documentation coverage is another. There is little point in making it easy for everyone to search for table and column names alone. We need to add metadata to our tables and columns so that Select Star’s search results parse that content too and provide all available information to seekers. Working in this direction, we have added to the Definition of Done for any new table in the production environment a clause requiring documentation on both the table and its columns. Table documentation should include not only technical details such as primary and foreign keys, level of granularity, and expected update frequency, but also business information such as the source of the dataset, as this varies between internal and external producers. Column documentation is expected to include expected values, data types and formats, but also business logic and insight.

The Business Intelligence team uses pre-commit hooks to ensure that all tables we produce contain descriptions for the tables themselves and all their columns, but we cannot always be sure of what is going on in other Data Products. As Data culture ambassadors (more on that in a separate post too), BI has set up a documentation coverage monitoring dashboard to quantify the docs coverage of tables produced by other products, raising alerts when the coverage percentage falls below the pre-agreed threshold.
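
A pre-commit-style check along those lines could look like the sketch below. It assumes dbt-style schema files under models/ with description fields on models and columns; the paths and keys follow dbt conventions, but this is not our exact hook.

```python
import sys
from pathlib import Path

import yaml  # PyYAML


def missing_descriptions(models_dir="models"):
    """Collect models and columns that lack a description in dbt-style schema files."""
    problems = []
    for schema_file in Path(models_dir).rglob("*.yml"):
        spec = yaml.safe_load(schema_file.read_text()) or {}
        for model in spec.get("models", []):
            if not model.get("description"):
                problems.append(f"{schema_file}: model '{model.get('name')}' has no description")
            for column in model.get("columns", []):
                if not column.get("description"):
                    problems.append(
                        f"{schema_file}: column '{model.get('name')}.{column.get('name')}' has no description"
                    )
    return problems


if __name__ == "__main__":
    issues = missing_descriptions()
    for issue in issues:
        print(issue)
    sys.exit(1 if issues else 0)  # a non-zero exit code blocks the commit
```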

Tags and Business and Technical owners are also implemented through Select Star, making it seamless for data seekers to ask questions and start discussions on the tables with the most relevant people available to help.

Self-Service Analytics

The whole Self-Service Analytics initiative at ORFIUM, as well as Data Governance, will get their very own blog posts. For now, let’s focus on the tools used.

Having all ORFIUM Data Products accessible on Snowflake and discoverable through Select Star, we were in a position to launch the Self-Service Analytics project. Decentralizing data requests away from BI was necessary in order to scale, but we could not just tell our non-analysts “the data is there, knock yourself out”.

We had to decide if we wanted Self-Service Analysts to work on Tableau or if we could find a better solution for them. It is interesting to tell the story of how we evaluated the candidate BI tools, as there were quite a few on our list. We do not claim this is the only correct way to do this, but it’s our take, and we must admit that we’re proud of it.

We decided to create a BI Tool Evaluation tool. We had to outline the main pillars on which we would evaluate the candidate tools. We then anonymously voted on the importance of those pillars, averaging the weights and normalizing them. We finally reached a total of 9 pillars and 9 respective weights (summing up to 100%). The pillar list contains connectivity effectiveness, sharing effectiveness, graphing, and exporting, among other factors.

These pillars were then broken down into small testing cases with which we would assess performance in each pillar, not forgetting to assign weights to these cases too, so that they sum up to 100% within each pillar. Long story short, we ended up with 80 points on which to assess each of the BI tools.

We needed to be as impartial as possible on this, so we assigned two people from the BI team to evaluate all 5 tools involved. Each BI tool was also evaluated by 5 other people from within ORFIUM but outside BI, all of them potential Self-Service Analysts.

Coming up with 3 evaluations for each tool, averaging the scores, and then weighting them with the agreed weights, led us to an amazing Radar Graph.
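
To make the arithmetic concrete, here is a toy sketch of the scoring: average the evaluators’ scores per test case, weight the cases within a pillar, then weight the pillars. Every number and name below is made up for illustration.

```python
def normalize(weights):
    """Scale a dict of raw weights so they sum to 1 (i.e. 100%)."""
    total = sum(weights.values())
    return {key: value / total for key, value in weights.items()}


# Pillar weights from the anonymous vote, averaged and normalized (illustrative values)
pillar_weights = normalize({"connectivity": 4.3, "sharing": 3.8, "graphing": 4.6, "cost_and_ease": 2.9})

# One pillar's test cases: case weights sum to 1, and each case has three evaluators' scores
case_weights = {"snowflake_connection": 0.7, "gsheet_upload": 0.3}
case_scores = {"snowflake_connection": [5, 4, 5], "gsheet_upload": [3, 3, 4]}


def pillar_score(scores, weights):
    """Average the evaluators' scores per case, then weight the cases."""
    return sum((sum(values) / len(values)) * weights[case] for case, values in scores.items())


tool_scores = {
    "connectivity": pillar_score(case_scores, case_weights),
    "sharing": 4.1,  # the remaining pillars would be computed the same way
    "graphing": 4.4,
    "cost_and_ease": 2.0,
}

total_score = sum(tool_scores[pillar] * pillar_weights[pillar] for pillar in pillar_weights)
print(round(total_score, 2))
```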

Though one tool was a clear winner in almost all pillars, it performed very poorly in the last pillar, which covered cost per user and ease of use/learning curve.

We decided to go with Metabase instead. We found that it would serve more than 80% of the current needs of Self-Service Analysts, at very low cost and with almost no code at all. In fact, we decided (Data Governance had a say in this too) that we would not allow users to write SQL queries in Metabase to create graphs. We wanted people who write SQL to go to the Snowflake UI, as those people were few and SQL-experienced, usually backend engineers.

We wanted Self-Service Analysts to use the query builder, which simulates an adequate share of SQL features, in order to avoid coding at all. If they got accustomed to using the query builder, they would cover roughly 80% of their needs without any SQL, and the rest of the Self-Service Analysts (the even less tech-savvy) would be inspired to try it out too.

After ~10 months of usage (on the self-hosted open-source version costing zero dollars per user per month, which translates to *calculator clicking* zero dollars total), we have almost 100 Monthly Active Users and over 80 Weekly Active Users, and a vibrant community of Self-Service Analysts looking to get more value from the data. The best news is that the Self-Service Analysts are becoming more and more sophisticated in their questions. This is solid proof that, within the course of 10 months, they have greatly improved their own data analysis skills, and consequently the effectiveness of their day-to-day tasks.

Within those (on average) 80 WAUs, the majority are Product Owners, Business Analysts, Operations Analysts, etc., but there are also around five high-level executives, including a member of the BoD.

Conclusion

The BI team and ORFIUM itself have evolved over the past few years. We started with Amazon Athena and QuickSight and, after part of the journey with Python by our side, we have established Snowflake, Airflow, dbt and Tableau as the BI stack, while adding Select Star for Data Discovery and Metabase for Self-Service Analytics to ORFIUM’s stack.

More on these in upcoming posts: we have plenty more insights to share on the Self-Service initiative, the Staff BI role, and the Data Culture at ORFIUM.

We are eager to find out what the future holds for us, but at the moment we feel future-proof.

Thomas Antonakis

Senior Staff BI Analyst

LinkedIn

Orfium’s BI journey: From 0 to Hero

Foreword

This is the story of Business Intelligence and Analytics at Orfium, from before there was a single team member or a team within the company, to now, when we have a BI organization that scales, data goals, and exciting plans for future projects.

Our story is a long one, and one we are enthusiastic to tell. We’ll go through the timeline of our journey to the present day, and we fully plan to elaborate on the main points discussed today in their own articles.

We hope you’ll enjoy the ride. Buckle up, here we go!

Where we started

Before we formally introduced Business Intelligence to Orfium, there were a few BI-adjacent functions at the company. A number of Data Engineers, Operations Managers, and Finance execs created some initial insights with manual data pipelines.

These employees primarily gathered insights on two main parts of the business.

1. Operations Insights – Department and Employee Performance

To get base-level information on the performance of departments and specific employees, a crew of Data Engineers and Operations Managers with basic scripting skills put together a Python script, which didn’t lack bugs. The script pulled data from CSV exports from our internal software, as well as exports from AWS S3 provided by the DE teams, and joined them to produce a final table. All the transformations were performed within the script. No automation was initially required, as our needs were mostly for monthly data. The final table would then be loaded into Excel and analyzed through pivot tables and graphs.

This solution provided some useful insights. However, it certainly couldn’t scale along with the organization. A few problems came up along the way that it could not solve: the need for more frequent updates with daily data, the ability to view historical performance, and the ability to join the data with important data from other sources were all reasons why Orfium sought a bigger, smarter, and more scalable approach to BI.

2. Clients Insights

Data Engineers put together a simple dashboard on Amazon QuickSight to give clients insights into the revenues we were generating for them. The data flowed from AWS S3 tables they had created, and the dashboard displayed bar charts of revenues over time, with some basic filtering. It was maintained for a couple of years but ultimately was replaced in March of 2022 with a more comprehensive solution provided by the BI team (spoiler alert: we created a BI team).

A new BI era 

In light of some of the issues mentioned above, a small team of BI Analysts was assembled to help with the increasing needs of the Operations team.

The first decision made by the BI Analysts was which tools to use for visualization and the ETL process. Nikos Kastrinakis, Director of Business Intelligence, had worked with Tableau previously, so he ran a demo and trial with Tableau and ultimately convinced the team to use it as our visualization tool. We also adopted Tableau Prep as our ETL tool. The company was now storing all relevant data in AWS S3, and the Data Engineers used AWS Athena to create views that transformed the data provided into usable tables that BI could join in Tableau Prep.

During the Tableau trial, the BI team started working on the first dashboard, set to be used by the Operations department, replacing the aforementioned buggy script. We created one Dashboard to rule them all, with information on overall department performance, employee performance, and client performance. This gave users their first taste of the power of a BI team. Our goal was to answer many of their questions in one concise dashboard, complete with historical breakdowns of different types of data, and bring insights that users hadn’t seen before. The Tableau trial ended right before the Dashboard was set to launch. So of course, we purchased our first Tableau licenses for Orfium and onboarded the initial users with the launch of this Dashboard. 

It was a huge success! The Operations team was able to phase out their use of the script, stop wasting time monthly to generate reports for themselves, and were exposed to a new way of gaining insights.

Our work with the Operations team didn’t stop there. Over the following months we continued to work with this data stack. We created further automation to bring daily data to the Operations team so they could manage departments and employees in real time. But this introduced some new challenges we had to face.

With the introduction of daily estimated data, the frequency of updates and the size of the views made the extracts unusable and obsolete, so we had to face the tradeoff of data freshness vs. dashboard responsiveness. Most of the stakeholders were happy to wait 30 seconds more when they looked at their dashboards, knowing that they had the most up-to-date data possible. Operations needed to be more agile in their decisions and actions, so having fresh data was very important for them. To date, members of the Operations team remain the most active users of Tableau at Orfium and have been active participants in other data initiatives across the company.

The reception of these initial dashboards was amazing. The stakeholders could derive value and make smarter decisions faster, so the BI team gained confidence and trust. However, the BI team was still mainly serving the Operations department (with some requests completed for Clients and Corporate insights) but was starting to get many requests from Finance, Products, and other departments. We began to add additional BI Analysts to serve these needs. However, this was just the beginning of the creation of a larger team that could serve more customers more effectively, as we also began improving internal tech features and utilizing external solutions for ready-made software.

Where we are today

We had many questions to clear up: 

Where to store our data, how to transform them, who is responsible for these transformations, who is responsible for the ready and delivered data points, who has access and how do they get it, where do we make our analyses, how do data move around platforms and tools, how do our data customers discover our work?

Months and months of discussions between departments on all these questions led to a series of decisions and commitments about our strategic data plans.

Where we stand now is still a transition from the previous stage, as we decided to take a giant step forward by embracing the Data Mesh initiative. We’ll have the chance to talk about some of the terms and combinations of software that we’re about to mention in future blog posts, but we can run through the basics right now.

Our company is growing very quickly and, given the fact that we prefer being Data-(insert cliche buzzword here), the needs and requests for BI and Data analysis are growing at double the speed.

The increase in the number of BI Analysts was inevitable with the increasing requests and addition of new departments in the company that needed answers for their data questions.

By hiring more BI Analysts, we split our workforce between our two main Data customers, and thus created two BI Squads.

One is focused on finance and external client requests. We named it the Corporate squad, and it consists of a BI Manager and 2 BI Analysts. This is the team that prepares the monthly Board meeting presentation materials (P&L, Balance Sheets), and the dashboards shared with our external customers so that we can use data to demonstrate the impact of our work on their revenue and so that they get a better understanding of their performance on YouTube. This squad also undertakes many urgent ad-hoc requests on a monthly basis. This squad has a zero tolerance policy for mistakes and usually works on a monthly revision/request cycle.

The second squad is more focused on analyzing and evaluating the performance and usage of our internal products, and connecting that information with the performance of our Operations teams, which generate the largest portion of our revenue. This squad, which happens to work from two different time zones, also consists of a BI Manager and 2 BI Analysts, and has more frequent deadlines, as new features come up very fast and need evaluation. The nature of the data and the continuous evolution of the data model result in less robust data.

In the meantime, we had already realized that we had to bulletproof our infrastructure and technical skills before scale gets to us. We decided to have some team members focused on delivering value by creating useful analyses, as described above, but we also reserved time and people who were more focused on paving the way for the rest of the analysts to be able to create more value, more efficiently.

We researched the community’s thoughts on this and we found the term Analytics Engineer, which seemed very close to what we were looking for. We thought this would be very important for the team and decided to go one step further and create a separate role that would be the equivalent of the Staff Engineer for software engineering teams. This role is more focused on setting the technical direction of the department, researching new technologies, consulting on the way projects should be driven, and enforcing best practices within the Analytics Chapter of the Data Unit. Quality, performance, and repeatability are the three core values that the code produced by this department should have.

We currently have a team of 8 people including the BI Director, two squads, each with one BI Manager and two BI Analysts, plus the Staff BI Analyst.

In terms of skillsets, we left Python behind. Instead, we’re focusing more on writing reliable and performant SQL code and collaborating efficiently on Git as a team. Our new toolkit is also more or less co-decided by the endorsement of a centralized data mesh, which is currently hosted on Snowflake. Nothing is hosted or processed locally anymore: we develop data pipelines using SQL, apply them to our dev/staging/production environments through dbt, and orchestrate the scheduling and data freshness using Airflow. We are the owners of our own Data Product, which is Orfium’s BI layer. It is a schema in Orfium’s production database where we store the fresh, quality, documented data resulting from our processes. This set of tables connects data from other teams’ data products (internal products, external reports, data science results) and creates interoperating tables. These tables are the base for all our Tableau dashboards, and help other teams use curated data without having to reinvent the wheel of the Orfium data model on their own. Our Data Product and our Tableau sites with all of our dashboards are fully documented and enriched with metadata, so that our data discovery tool, Select Star, allows stakeholders to search and find all aspects of our work.

The future

Data Mesh was a big bet for Orfium and we will continue to build on it. The principles are in place and we are onboarding all departments onto the initiative so that we can take advantage of the outcomes to the fullest extent. When this hard process is completed, all teams will enjoy centralized data and the interoperability that derives from it, along with domain-driven data ownership that ensures the agreed levels of data quality, helping Orfium become more data-powered.

In addition to the obvious outcomes of applying the Data Mesh principles in a company, we believe that we need to follow up with two more major bets.

We decided to initiate, propose and promote Data Culture in Orfium. This is a very big project and is so deep that all employees need to get out of their comfort zones to achieve it. We need to change the way we work, to start planting data seeds very early in all our projects, products, initiatives, and working behavior so that we can eventually enjoy the results later on. This initiative will come with a Manifesto, which is being actively written and soon will be published. It will require commitment and follow-up on the principles proposed so that we achieve our vision.

Self-Service Analytics is also one of the Principles that Data Mesh is based on, and we decided to move forward emphatically with this too. Data will be generally accessible on Snowflake by everyone, but Data Analysis requires data literacy, SQL chops, and infrastructure that can host large amounts of data in an analysis. We decided to use Metabase as the proxy and facilitator for Self-Service Analytics. It provides the infrastructure by analyzing the data server-side, and not locally, and its query builder for creating questions is an excellent tool to create no-code analyses. Surely, it is not as customizable as SQL, but it will cover 85% of business users’ needs with superior usability for non-technical users.

This leaves us with data literacy and consultancy. For this, we have set up a library of best practices, examples, tutorials, and courses explaining how to handle business questions, analyses, limitations of tools, etc. At Orfium we always want to take a step further though, and we have been working to formulate a new role that will provide in-depth consultancy on data issues. This role will act as a general family doctor, who you know personally and trust, and will handle all incoming requests on data problems. Even if the data doctor cannot directly help you, they can direct you to a more suitable “doctor”, a set of more specialized experts, each one in their data sector (Infrastructure, SQL, Data Visualization, Analysis Requirement, you name it).

To infinite data and beyond

What a journey this has been over the last 3-4 years for BI in Orfium! We have gone through a lot, from not having official Business Intelligence to a BI team that has plans, adds value for the organization and inspires all teams to embrace the data-driven lifestyle. We’ve done a great job so far, and we have great plans for the future too.

It’s a long way to the top, if you want to rock and roll, ACDC :zap:

Stephen Dorn

Senior Business Intelligence Analyst

LinkedIn

Thomas Antonakis

Senior Staff BI Analyst

LinkedIn

What is the shape of you, Ed Sheeran? An introduction to NER

Introduction

He is, of course, a recording artist and a guest actor in Game of Thrones. His shape, without going into details, is pretty human. But you knew that already. You made the connection between these words and the entity they represent. However, this isn’t quite as easy and straightforward a task for a computer. Enter Named Entity Recognition (NER) to save the day. NER is essentially a way to teach a computer what words mean.

What is NER?

We can first look at the formal definition:

“NER is a subtask of information extraction that seeks to locate and classify named entities mentioned in unstructured text into pre-defined categories such as person names, organizations, locations, medical codes, time expressions, quantities, monetary values, percentages, etc.” (Wiki)

That wasn’t very helpful at first glance. Let’s try a simple example:

In 2025, John Doe traveled to Greece and visited the Acropolis, where the Parthenon is.

Given the context, we might be interested in different types of entities. If what we are after are semantics, then we simply need to understand which words signify persons, which signify places, etc.

On the other hand, in some cases we might need syntactic entities like nouns, verbs, etc.
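
To make the distinction concrete, here is a quick illustration using spaCy’s small English model (installed separately with `python -m spacy download en_core_web_sm`); the exact labels you get depend on the model.

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("In 2025, John Doe traveled to Greece and visited the Acropolis, where the Parthenon is.")

# Semantic entities: who, where, when
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "2025" DATE, "John Doe" PERSON, "Greece" GPE

# Syntactic information: part-of-speech tags for the first few tokens
for token in doc[:6]:
    print(token.text, token.pos_)  # e.g. "traveled" VERB
```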

Why NER?

Okay, now we know what NER is, but what does it do, in real life? Well, plenty of things:

  • Efficient searching: This applies to any service that uses a search engine and has to answer a large number of queries. By extracting the relevant entities from a document corpus, we can split it into smaller homogeneous segments. Then, at query time we can reduce the search space and time by only looking into the most relevant segments.
  • Recommendation systems: News publishers, streaming services and online shops are just a few examples of services that could benefit from NER. Clustering articles, shows or products by the entities they contain helps a recommendation engine deliver great suggestions to users, based on the content they prefer.
  • Research: Each year the already tremendous volume of papers, research journals and publications increases further. Automatically identifying entities such as research areas, topics, institutions, and authors can help researchers navigate through this vast interconnected network of publications and references.

Now we’re getting somewhere. We know what NER is and have a few good ideas about where it can be used. But why and where are we at ORFIUM using it?

NER applications at ORFIUM

Text matching

In some of our services we use Natural Language Processing (NLP) methodologies to match recording or composition catalogs with other catalogs, internally and externally. NER can aid this process by extracting the most relevant industry-related entities, like song titles and artists, which can then be used as features for our current algorithms and models.

Data cleaning

The great volume of data we ingest daily often contains irrelevant and superfluous information. An example of this is YouTube catalogs, where video titles usually contain more than just song titles or artist names, and might have no useful information at all. By extracting the entities most relevant to the music industry, we essentially remove the noise, which leads to better metadata as well as a more trustworthy knowledge base.

Approaches and Limitations

Depending on the context and the text structure, there are various approaches that can be employed, but they usually are grouped into two general categories, each with its own strengths and drawbacks: rule-based and machine learning approaches.

In rule-based approaches, a set of rules is derived based on standard NLP strategies or domain-specific knowledge and then used to recognize possible entities. For example, names and organizations are capitalized, and dates are written in formats like YYYY/MM/DD.

  • Pros:
    • Straightforward and easy to implement for well-structured text
    • Domain knowledge can be easily integrated
    • Usually computationally fast and efficient
  • Cons:
    • Rule sets can get very large, very fast for complicated text structures, requiring a lot of work
    • General purpose rule sets not easily adaptable to specific domains
    • Changes to the text structure further complicate rule additions and interactions.

In machine learning approaches, a model is trained using a dataset annotated specifically for the task at hand. The model learns all the different ways in which relevant entities appear in text and can then be used to identify them in the future.

  • Pros:
    • Training process is domain-agnostic with easily customizable entity tags
    • Well-suited for unstructured text and easily adaptable to structure changes
    • Pre-trained models can be customized and used to speed up the training process
  • Cons:
    • The process requires large amounts of annotated entries to create a robust model
    • May require annotators with specific domain expertise
    • Training process can be costly in terms of time and money depending on the use-case

Our Project

What we wanted to accomplish was to build a baseline entity extraction process which could potentially later be used to improve our matching and other services.

Dataset

A good starting point for that would be the YouTube catalogs we ingest. These are catalogs of unmatched sound recordings. As mentioned earlier, video title structures are usually a bit chaotic. Therefore, this use case is an excellent candidate to test the potential and limitations of NER.

In the video titles, the most relevant entities present, and the ones we would like to identify, are recording TITLE, PERSON and VERSION (remix, official video, live, etc.).

We investigated both a rule-based and a machine learning approach. For their evaluation, however, we needed an annotated dataset tailored to our use case. For that reason we turned to LabelStudio and our Operations Team. LabelStudio is an open-source online data annotation tool with an intuitive UI, where we uploaded a catalog sample. The sample was split into sub-tasks which then were handled by our Operations Team.

Label Studio – Open Source Data Labeling 

At this point, we would like to say a big thank you to the Operations Team for their help. Dataset annotations are almost always quite tedious and repetitive work, but an incredibly important first step in our testing.

Rule-based approach

For the construction of our rules, we first needed to investigate whether there was any kind of structure in the video title text. We found a few patterns.

Information inside parentheses

The first thing we noticed is that when parentheses ( (), [], {} ) were present, they mostly contained featured artists or version information, like live, acoustic, remix, etc. This information was rarely found outside parentheses.

For these reasons we wrote a few simple rules for attributes inside parentheses:

  • If they contained any version keywords (live, acoustic, etc.), tag them as VERSION
  • If “feat” was present, then tag the tokens after that as PERSON

Segmentation

One other thing we noticed was that some entries could be split into segments using certain delimiters ( -, |, / ). These entries could be generally split into 2-4 segments. Also, “|” and “/” have higher priority than “-”. When split by | or /, the first segment mostly contained recording titles and sometimes also artists. When split by -, the picture was not quite as clear, since titles and artists appeared both in the first segment as well as the rest. The most prevalent case, however, was the artist appearing in the first segment and the title in the second.

Based on the above we have the following rules for splittable entries:

  • When split by | or /, tokens in the first segment are tagged as TITLE and tokens in the second segment as PERSON
  • When split by –, tokens in the first segment are tagged as PERSON and tokens in the second segment as TITLE

Finally, tokens in entries that did not fall into any of the above categories were tagged as TITLE.
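
A compressed sketch of those rules is shown below; it is illustrative rather than the exact production implementation, and the version keyword list is abbreviated.

```python
import re

VERSION_KEYWORDS = {"live", "acoustic", "remix", "official", "video", "cover"}


def tag_title(video_title):
    entities = []

    # Rule 1: content inside parentheses/brackets/braces
    for inner in re.findall(r"[\(\[\{](.*?)[\)\]\}]", video_title):
        lowered = inner.lower()
        if "feat" in lowered:
            entities.append((inner.split("feat", 1)[1].strip(" ."), "PERSON"))
        elif any(word in lowered for word in VERSION_KEYWORDS):
            entities.append((inner.strip(), "VERSION"))

    # Rule 2: segmentation; "|" and "/" take priority over "-"
    remainder = re.sub(r"[\(\[\{].*?[\)\]\}]", "", video_title).strip()
    if re.search(r"[|/]", remainder):
        segments = [s.strip() for s in re.split(r"[|/]", remainder) if s.strip()]
        labels = ["TITLE", "PERSON"]
    elif "-" in remainder:
        segments = [s.strip() for s in remainder.split("-") if s.strip()]
        labels = ["PERSON", "TITLE"]
    else:
        segments, labels = [remainder], ["TITLE"]

    entities.extend(zip(segments[:2], labels))
    return entities


print(tag_title("Ed Sheeran - Shape of You (Official Video)"))
# [('Official Video', 'VERSION'), ('Ed Sheeran', 'PERSON'), ('Shape of You', 'TITLE')]
```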

Machine learning approach

Our work on the machine learning approach was much more straightforward. We decided to go with transfer learning. This is a process where we take a state-of-the-art model, pre-trained (usually on public, general-purpose datasets), and partly extend its training with a custom dataset. This is very efficient, since we don’t have to spend time training the model from scratch, but we still get to tailor it to our needs.

spaCy · Industrial-strength Natural Language Processing in Python 

For that purpose we used spaCy, a well-established, open-source Python package for NLP. It supports multiple languages and NLP algorithms, including NER. Its models are easily retrained and integrated with a few lines of code. It’s also great that some spaCy models are optimized for accuracy and others for speed. The spaCy models were retrained using the annotated dataset provided by our Operations Team.
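
For reference, fine-tuning spaCy’s NER on a custom annotation set looks roughly like the sketch below (spaCy 3.x API). The training example and its character offsets are illustrative; the actual training used the full annotated catalog sample.

```python
import random

import spacy
from spacy.training import Example

nlp = spacy.load("en_core_web_sm")  # start from a pre-trained pipeline
ner = nlp.get_pipe("ner")
for label in ("TITLE", "VERSION"):  # PERSON already exists in the base model
    ner.add_label(label)

# One illustrative annotated title; character offsets mark (start, end, label)
TRAIN_DATA = [
    ("Ed Sheeran - Shape of You (Official Video)",
     {"entities": [(0, 10, "PERSON"), (13, 25, "TITLE"), (27, 41, "VERSION")]}),
]

# Fine-tune only the NER component (keep a shared tok2vec active if the pipeline has one)
with nlp.select_pipes(enable=["ner", "tok2vec"]):
    optimizer = nlp.resume_training()
    for _ in range(20):
        random.shuffle(TRAIN_DATA)
        for text, annotations in TRAIN_DATA:
            example = Example.from_dict(nlp.make_doc(text), annotations)
            nlp.update([example], sgd=optimizer, drop=0.2)

doc = nlp("Ed Sheeran - Shape of You (Official Video)")
print([(ent.text, ent.label_) for ent in doc.ents])
```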

Results

Both approaches performed very well and identified the majority of the TITLE entities. As far as PERSON and VERSION entities are concerned, the rule-based approach struggled a bit, while the machine learning one did a decent job, though both still produced some wrong predictions.

We also faced a few common issues with both approaches, which made their predictions less accurate.

Conclusion

Here is where today’s journey comes to an end. We had a chance to briefly introduce the concept of Named Entity Recognition, describe a few of its general and more custom uses, and learned that, despite the variety of approaches, they all come with caveats and we usually have to make compromises depending on our needs. Is our text well-structured? Are our entities generic or do they require specific domain knowledge? How do different approaches adapt to changes? Are we able to annotate our own datasets?

We also started this article with a question. Did NER help us answer it? Our models certainly tried. Both our rule-based and machine learning approaches gave us their answer when asked to identify the entities in “Ed Sheeran – Shape of You”.

But what do we know? They seem to perform very well, so they might be right.

Theodoros Palamas

Machine Learning Researcher/Data Scientist @ ORFIUM

https://www.linkedin.com/in/theodoros-palamas-a755b623b/

Video Similarity with Self-supervised Transformer Network

Do you ever wonder how many times your favorite movie exists on digital platforms?

My favorite animated video when I was a child was Happy Hippo. I spent innumerable hours watching videos of the plump hippopotamus on YouTube. One thing I remember clearly, however, is how many copies of the same clip were uploaded. Some were reaction vids, others had funny songs and photo themes in the background. Now that I am older, I am wondering: can we actually figure out how many versions of the same video exist? Or, more scientifically, can we extract a probability score for a video pair match?

Introduction

Content-based video retrieval is drawing more and more attention these days. Visual content is one of the most popular types of content on the internet, and there is an incredible degree of redundancy, as we observe a high number of near or exact duplicates of all types of videos. Even everyday users come across the same videos in their daily use of the web, from YouTube to TikTok. This matters even more in many video-related applications, including copyright protection, where movie trailers in particular are targets for re-uploads, general piracy or simply reaction videos. Usually, techniques such as zooming, cropping or slight distortions are used to differentiate the duplicate video so that it is not taken down.


Approaches

From Visual Spatio-Temporal Relation-Enhanced Network and Perceptual Hashing Algorithms to Fine-grained Spatio-Temporal Video Similarity Learning and Pose-Selective Max Pooling, video similarity is a popular task in the field of computer vision. While finding exact duplicates can be done in a variety of ways, near duplicates or videos with modifications still pose a challenge. Most video retrieval systems require a large amount of manually annotated data for training, making them costly and inefficient. To match the current pace of video production, an efficient self-supervised technique needs to emerge to tackle the storage and computation shortcomings.

What is Self-supervised Video Retrieval Transformer Network?

Based on the research effort that has been done on “Self-supervised Video Retrieval Transformer Network” (He, Xiangteng, Yulin Pan, Mingqian Tang and Yiliang Lv) we replicate the architecture with certain modifications.

To begin, we introduce the suggested Self-Supervised Video Retrieval Transformer Network (SVRTN) for effective retrieval, which decreases the costs of manual annotation, storage space, and similarity search. It primarily comprises two components: self-supervised video representation learning and a clip-level set transformer network. Initially, we use temporal and spatial adjustments to construct the video pairs automatically. Then, via contrastive learning, we use these video pairs as supervision to learn frame-level features. Finally, we use a self-attention technique to aggregate frame-level features into clip-level features, using masked frame modeling to improve robustness. The approach leverages self-supervised learning to learn video representations from unlabeled data, and exploits the transformer structure to aggregate frame-level features into clip-level ones.

Self-supervised video representation learning is used to learn the representation from pairs of videos and their transformations, which are generated automatically via temporal and spatial transformations, eliminating the significant costs of manual annotation. Because it self-generates its training data, the SVRTN technique can learn a better video representation from a huge number of unlabeled videos, resulting in improved generalization of the learned representation.

A clip-level set transformer network is presented for aggregating frame-level features into clip-level features, resulting in significant savings in storage space and search complexity. Via the self-attention mechanism it can learn complementary and variant information from the interactions between clip frames, as well as invariance to frame permutation and missing frames, which handles the issue of missing frames and improves the clip-level feature’s discriminative power and resilience. Furthermore, it allows more flexible retrieval methods, including clip-to-clip and frame-to-clip retrieval.

Self-supervised – Self-generation

After collecting a large number of videos, temporal and spatial transformations are sequentially performed on these clips to construct the training data.

Temporal transformations: To create the anchor clip C, we evenly sample N frames with a set time interval r. Then a frame I_m is chosen at random from the anchor clip as the identical material shared by the anchor clip C and the positive clip C+. We treat the chosen frame as C+’s median frame, and sample (N−1)/2 frames forward and backward with a different sampling time interval r+.

Spatial transformations: We then apply spatial transformations to each frame. Three forms of spatial transformations are explored: (a) photometric transformations, covering brightness, contrast, hue, saturation, and gamma adjustments, among others; (b) geometric transformations, offering horizontal flip, rotation, crop, resize, and translation adjustments; and (c) editing transformations, including effects such as a blurred background, a logo overlay, picture-in-picture, and so on. For the logos, we use the sample LLD-logo dataset, which consists of 5,000 logos (32×32 resolution, PNG). During the training stage, we pick one transformation from each type of spatial transformation at random and apply it to frames from positive clips in order to create new positive clips.

Triplet Loss

Triplet loss is a loss function where a reference input is compared to a matching and a non-matching input: the distance from the anchor to the positive is minimized, while the distance to the negative input is maximized. In our project, we use the triplet loss instead of the contrastive loss at both the frame level and the clip level.
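
In PyTorch terms, the triplet loss we describe boils down to something like the sketch below; the margin value is an assumption.

```python
import torch.nn.functional as F


def triplet_loss(anchor, positive, negative, margin=0.3):
    """Pull the anchor towards the positive and push it away from the negative."""
    pos_dist = F.pairwise_distance(anchor, positive)
    neg_dist = F.pairwise_distance(anchor, negative)
    return F.relu(pos_dist - neg_dist + margin).mean()

# torch.nn.TripletMarginLoss(margin=0.3) is the equivalent built-in
```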

Video Representation Learning – Frame-level Model

Since the supervised video pairs have already been generated, we employ them to train the video representation with a frame-level triplet loss. To acquire the frame-level feature, a pretrained ResNet50 is used as the feature encoder, followed by a convolutional layer to lower the channel count of the feature map, and finally average pooling and L2 normalization.

By minimizing the distance between features of the anchor clip frames and positive clip frames, as well as maximizing the distance between features of the anchor/positive clip frames and negative clip frames, video representation learning aims to capture spatial structure from individual frames while ignoring the effects of various transformations.
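
A sketch of that frame-level encoder in PyTorch: a pre-trained ResNet50 backbone, a 1×1 convolution to reduce channels, average pooling and L2 normalization. The reduced dimension of 512 is an assumption.

```python
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class FrameEncoder(nn.Module):
    def __init__(self, out_dim=512):
        super().__init__()
        backbone = resnet50(weights="IMAGENET1K_V2")                     # pre-trained feature encoder
        self.features = nn.Sequential(*list(backbone.children())[:-2])   # drop avgpool/fc, keep feature maps
        self.reduce = nn.Conv2d(2048, out_dim, kernel_size=1)            # lower the channel count

    def forward(self, frames):                  # frames: (batch, 3, H, W)
        feature_map = self.reduce(self.features(frames))
        pooled = F.adaptive_avg_pool2d(feature_map, 1).flatten(1)
        return F.normalize(pooled, dim=1)       # L2-normalized frame-level features
```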

Clip-level Set Transformer Network

Because subsequent frames from the same clip have comparable material, frame-level features are highly redundant, and their complementary information is not fully exploited. Specifically, self-supervised video representation learning is used to extract a series of frame-level features from a clip, which are then aggregated into a single clip-level feature x.

We present a modified Transformer, the clip-level set transformer network, to encode the clip-level feature. Instead of utilizing a Transformer to encode the clip-level feature directly, we use the set retrieval concept in the clip-level encoding. Without position embedding, we just utilize one encoder layer with eight attention heads. It gives our SVRTN method the following capabilities:

  1. More robust: We increase the robustness of the learned clip-level features with the ability of frame permutation and missing invariant.
  2. More flexible: We support more retrieval manners, including clip-to-clip retrieval and frame-to-clip retrieval.

We treat the frames of one clip as a set and randomly mask some frames during clip-level encoding to improve the robustness of the learnt clip-level features. We drop some frames at random from a clip C to create a new clip C’. The purpose of this exercise is to eliminate the influence of frame blur or clip cuts, and to enable the model to retrieve the corresponding clips from any combination of frames in the clip. We then use these clips to calculate the triplet loss.
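
The clip-level encoder can be sketched in PyTorch as a single transformer encoder layer with eight heads and no positional embedding, with random frame masking during training (simplified here to zeroing frames rather than dropping them) and mean pooling into one clip-level feature; the dimensions and mask ratio are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ClipSetTransformer(nn.Module):
    def __init__(self, dim=512, heads=8, mask_ratio=0.3):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)  # one layer, no position embedding
        self.mask_ratio = mask_ratio

    def forward(self, frame_features):          # (batch, n_frames, dim)
        if self.training:
            keep = torch.rand(frame_features.shape[:2], device=frame_features.device) > self.mask_ratio
            keep[:, 0] = True                   # always keep at least one frame
            frame_features = frame_features * keep.unsqueeze(-1)
        encoded = self.encoder(frame_features)
        clip_feature = encoded.mean(dim=1)      # aggregate the frame set into one clip-level feature
        return F.normalize(clip_feature, dim=1)
```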

Video Similarity Calculation

First, we perform shot boundary detection on each video to segment it into shots, and then divide the shots into clips at a set time interval, i.e. N seconds. Second, the sequence of successive frames is passed through the clip-level set transformer network to generate the clip-level feature. Finally, IsoHash binarizes the clip-level feature to further reduce storage and search costs. We use the Hamming distance to measure clip-to-clip similarity at retrieval time.

Shots are extracted with shot boundary/transition detection using TransNetV2. The lift-and-projection variant of IsoHash is used for the binarization.
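
At retrieval time, comparing binarized clip codes with the Hamming distance simply means counting differing bits; the codes below are made up for illustration.

```python
import numpy as np


def hamming_distance(code_a, code_b):
    """Number of differing bits between two binary clip codes."""
    return int(np.count_nonzero(code_a != code_b))


query = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
index = {
    "clip_001": np.array([1, 0, 1, 1, 0, 1, 1, 0], dtype=np.uint8),
    "clip_002": np.array([0, 1, 0, 0, 1, 1, 0, 1], dtype=np.uint8),
}

ranked = sorted(index, key=lambda clip_id: hamming_distance(query, index[clip_id]))
print(ranked)  # clip_001 comes first (distance 1 vs 8)
```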

Conclusions

We used a variety of modifications to evaluate our model with videos of sports, news, animation and movies.

  • 53 Transformations:
    • Size : Crop
    • Time : Fast Forward
    • Quality : Black & White
    • Others : Reaction
  • Most efficient categories: fast, intro-outro, watermark, contrast, slow, B&W effect
  • Less efficient categories: extras, black-white, color yellow, frame insertion, color blue, resize
  • The model performs well and is tolerant under zoom/crop. There is no direct relation between these attributes and similarity but it seems that medium levels are the most efficient.
  • There seems to be a relation between the number of shots and similarity.
  • Reduced space and calculation cost

Useful Links

Future work: Video Similarity and Alignment Learning on Partial Video Copy Detection

Possible extension: https://www.jstage.jst.go.jp/article/ipsjtcva/5/0/5_40/_article

SVD Dataset: SVD – Short Video Dataset

My Internship at ORFIUM

Why did I want to do another internship?

After interning last summer and another year of studying, I was looking forward to getting my hands on more practical matters in the orientation I wanted my career to take, which is AI Research.

So, after finishing my studies, I felt I wanted to put all of the knowledge I had just acquired to the test against real-world problems. During my early professional steps, I feel it is important to handle a wide variety of issues and learn to work with different kinds of people. It’s not enough just to do the job; I also want to be able to find a good work-life balance.

Okay, but why intern at ORFIUM?

In my previous internship, I worked for an already scaled company that dealt with generic software engineering issues. This gave me a solid understanding of the life of an engineer. I was ready for something different. I wanted to learn at a still rapidly scaling company and have a more specific role.

Before the internship, Pantelis Vikatos, head of the Research Team, and I discussed the possible projects I could help with and learn from, to make them fit both my interests and the company’s goals.

As an intern, I wanted to have the chance to apply what I’ve learned from my studies and previous work experience. At the same time, I would love to actually contribute to a company such as ORFIUM. Seeing the passion of the people already working here motivated me further and allowed me to truly realize the value of the task at hand.

My AI and research background, in combination with the open-minded, free culture of the music industry and especially ORFIUM, was just the right match.

So, how was interning at ORFIUM? 

Not being a first-time intern, I had a realistic outlook on the whole process. This time I wanted to go a step forward and contribute even more however I could. I was ready to take on even more responsibilities and do the best I could to put my skills to the test, creating a win-win scenario for both the company and me.

At the end of the day, I understood that getting the job done was not the only goal. Being efficient and working cleanly, while also contributing to a good working environment for my colleagues and me, was exactly what I expected from myself and the company.

And was ORFIUM a good place to intern?

Being an intern at ORFIUM surely exceeded my expectations.

From the first moments and interactions, I realized that I was in a friendly and open environment. This made the whole process flow smoothly. Everyone was there to help or answer whatever questions I had.

The company provided whatever I needed to work at a professional level in terms of equipment and infrastructure. I was given my own laptop and peripherals, as well as instructions to get my job done easily. Virtual machines and online resources were managed and provided internally by experts in order to supply whatever was needed.

This way, the internship kicked off in the best way possible. I was entrusted to lead my project my way and at my own pace, without anyone doubting my ability to handle my responsibilities. This was enough to let me know that not only was I in an open-minded environment, but my voice was also heard.

What did I actually do at ORFIUM?

The project I was assigned to was to replicate the work done at the paper with the title “Self-supervised Video Retrieval Transformer Network”, creating a video matching mechanism.

The main objective was to answer the question: “Can we extract a probability score for video pair match?”.

The motivation behind this question was the observation of a high number of near-duplicate videos online. We would like to be able to find similar videos, which could be re-uploads, piracy content etc.

The workflow can be described as:

  1. Common state-of-the-art approaches
  2. Model Architectures
  3. Evaluation Methods
  4. Proposed Method
  5. Documentation of online sources on:
    1. State-of-the-art literature
    2. Public datasets
  6. Implementation of a Video – visual transformers deep learning model
  7. Training & Evaluation of the proposed model

After a few modifications and a lot of questions, the results were good enough to be able to deliver the trained model and the evaluations.

At that point, having completed the basic goal of my internship, I researched possible extensions. I also gave a presentation to the rest of the team, explaining the process and demonstrating the results of my work.

So, what did I learn?

After finishing my internship, I think back and reflect on the various experiences and lessons I had during these three months. There is no comparison between the practical applications on a company level and the experience in a university semester. 

First of all, I had the opportunity to see how a scaling company like ORFIUM operates. I learned about the hierarchy of the different teams and departments, their roles and responsibilities, and the processes and workflows. I experienced hands-on how a project is planned, how it is split into simple tasks, and how different teams collaborate to accomplish these tasks.

Also, I had the opportunity to talk and collaborate with teams internal and external from ORFIUM. I saw, first-hand, professional experts and the way they work. We established international communications in order to handle specific matters. 

Industry-wise, I was gently introduced to the basic concepts of the music industry. I had the chance to see the variety of challenges it faces. To be honest, it was even richer and more interesting than I had imagined.

Working on my project, I learned how to start and plan a research project and how to organize my work so that I do things faster and better as a professional. I practiced more on things that I was already familiar with, and I learned a lot by asking questions about everything I thought I knew. Wrapping up my project, I learned how to produce something well documented and reusable and how to present my work to my teammates in a structured way in order to achieve company-level awareness and leave my mark.

What was the best part of the internship?

If I had to choose the thing I liked most about being an intern at ORFIUM, it would be the human-centric culture they bring to the table. The easy-going but focused way of working really allows people to feel free and do their best to contribute.

Overall, I felt accepted and trusted. The project was “tailored” to my interests, and my supervisors were there to help me with whatever I needed. The team’s constant urge to do activities in and out of the working environment made the whole experience feel like more than just an internship.

What are the next steps, post-internship?

My thoughts now that I’m almost done with my internship are only positive. I am going to take all the experiences, the lessons learned, the people I met and collaborated with, and all the good memories, finish whatever is pending at my university, and be able to contribute even more in the future. Hopefully, after that, I can come back to help ORFIUM scale even more, alongside my own personal growth. 🙂

Giannis Prokopiou

Data scientist intern – Research Team @ ORFIUM
