Data Science

Apache Airflow is a platform to programmatically author, schedule, and monitor workflows.
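As a hedged sketch of what authoring a workflow in code looks like, a minimal DAG with two dependent tasks might read as follows, assuming the Airflow 2.x Python API (the dag_id, task ids, and task bodies are placeholders):

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        print("extracting ...")  # placeholder task body

    def load():
        print("loading ...")  # placeholder task body

    # The DAG (directed acyclic graph) defines the schedule and task ordering.
    with DAG(
        dag_id="example_etl",
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)
        extract_task >> load_task  # extract runs before load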
AIaaS, short for Artificial Intelligence as a Service, is a cloud-based platform that provides ready-to-use AI and machine learning services and tools. It allows organizations to access and integrate AI capabilities into their applications without having to build and maintain complex AI infrastructure.

AIOps, short for Artificial Intelligence for IT Operations, is a technology-driven approach that combines artificial intelligence and machine learning with IT operations to enhance automation, improve insights, and optimize the management of IT systems and processes. It is used to monitor, analyze, and manage complex IT environments, leading to more efficient and proactive operations.

Artificial Intelligence Trust, Risk, and Security Management (AI TRiSM) is an emerging trend in technology set to reshape the business landscape in the near future. This framework enables organizations to identify, monitor, and mitigate risks tied to the use of AI technologies, including advanced generative and adaptive AI. It ensures compliance with regulations and data privacy laws while fostering AI model governance, trustworthiness, fairness, reliability, robustness, efficacy, and data protection.

Amazon Elastic MapReduce (EMR) is a tool for big data processing and analysis. It provides a managed Hadoop and Apache Spark framework that makes it easier to process vast amounts of data across dynamically scalable Amazon EC2 instances.
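As a hedged illustration of the "dynamically scalable" part, a cluster can be launched from code with the AWS boto3 SDK; every name, release label, role, and instance type below is a placeholder:

    import boto3

    emr = boto3.client("emr", region_name="us-east-1")
    response = emr.run_job_flow(
        Name="example-cluster",
        ReleaseLabel="emr-6.15.0",
        Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
        Instances={
            "InstanceCount": 3,
            "MasterInstanceType": "m5.xlarge",
            "SlaveInstanceType": "m5.xlarge",
            "KeepJobFlowAliveWhenNoSteps": False,
        },
        JobFlowRole="EMR_EC2_DefaultRole",
        ServiceRole="EMR_DefaultRole",
    )
    print(response["JobFlowId"])  # id of the newly launched cluster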
Amazon SageMaker is a cloud machine-learning platform. SageMaker enables developers to create, train, and deploy machine-learning models in the cloud, and to deploy trained models on embedded systems and edge devices.
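A hedged sketch of the typical train-then-deploy pattern, assuming the sagemaker Python SDK; the image URI, IAM role, and S3 paths are placeholders:

    import sagemaker
    from sagemaker.estimator import Estimator

    session = sagemaker.Session()
    estimator = Estimator(
        image_uri="<training-image-uri>",
        role="<execution-role-arn>",
        instance_count=1,
        instance_type="ml.m5.xlarge",
        output_path="s3://<bucket>/output",
        sagemaker_session=session,
    )
    estimator.fit({"train": "s3://<bucket>/train"})  # run a managed training job

    # Serve the trained model behind a managed HTTPS endpoint.
    predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.xlarge")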
Apache Ambari is a software project that makes Hadoop management easier. It enables system administrators to provision, manage, and monitor Hadoop clusters. It also provides features for installing and configuring Hadoop services and a dashboard for monitoring cluster status.

Analysis is the process of breaking a complex topic or substance into smaller parts in order to gain a better understanding of it.

Apache Avro is a data serialization system that relies on schemas for reading data. Using a schema helps reduce the serialization size. Avro also provides rich data structures, remote procedure calls, a compact binary data format, and integration with dynamic languages.
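A minimal sketch of schema-based serialization, using the third-party fastavro package rather than Avro's own Python bindings; the record schema is made up for illustration:

    from io import BytesIO
    from fastavro import parse_schema, reader, writer

    # The schema describes every record, which is what keeps the binary format compact.
    schema = parse_schema({
        "type": "record",
        "name": "User",
        "fields": [
            {"name": "name", "type": "string"},
            {"name": "age", "type": "int"},
        ],
    })

    records = [{"name": "Ada", "age": 36}, {"name": "Alan", "age": 41}]

    buf = BytesIO()
    writer(buf, schema, records)   # serialize using the schema
    buf.seek(0)
    for record in reader(buf):     # the schema travels with the data
        print(record)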
Apache Beam is a unified model for defining both batch and streaming data-parallel processing pipelines. It provides a general approach to expressing embarrassingly parallel data processing pipelines and supports end users, SDK writers, and runner writers.
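A minimal batch pipeline in the Beam Python SDK; run locally it uses the DirectRunner, and the same code can target other runners such as Dataflow, Flink, or Spark:

    import apache_beam as beam

    with beam.Pipeline() as pipeline:
        (
            pipeline
            | "Create" >> beam.Create(["alpha", "beta", "gamma"])
            | "Upper" >> beam.Map(str.upper)   # element-wise, embarrassingly parallel
            | "Print" >> beam.Map(print)
        )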
Apache Flink is a general-purpose data processing platform and a top-level Apache project. It provides efficient, fast, accurate, and fault-tolerant handling of massive streams of events. Flink is usable for dozens of big data scenarios and is capable of running in standalone mode. Its defining feature is its ability to process streaming data in real time.
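A hedged sketch using the PyFlink DataStream API, with a small in-memory collection standing in for a real stream such as a Kafka topic (recent PyFlink versions can infer the output type of the map):

    from pyflink.datastream import StreamExecutionEnvironment

    env = StreamExecutionEnvironment.get_execution_environment()
    stream = env.from_collection([1, 2, 3, 4, 5])  # bounded stand-in for a stream
    stream.map(lambda x: x * x).print()
    env.execute("square_numbers")  # nothing runs until the job is submitted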
Apache Flume is a distributed, reliable, and highly available service for collecting, aggregating, and moving large amounts of streaming data from multiple sources to a centralized repository.

Apache Hive is a data warehouse infrastructure built on top of Hadoop. It provides tools for easy data ETL, a mechanism to impose structure on the data, and the capability to query and analyze large datasets stored in Hadoop files.
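A hedged sketch of querying Hive from Python via the third-party PyHive client; the host, table, and column names are placeholders:

    from pyhive import hive  # third-party client, not part of Hive itself

    conn = hive.Connection(host="hive-server.example.com", port=10000)
    cursor = conn.cursor()

    # HiveQL looks like SQL but is compiled into jobs over data stored in Hadoop.
    cursor.execute("SELECT country, COUNT(*) FROM visits GROUP BY country")
    for row in cursor.fetchall():
        print(row)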
Apache Mahout is a machine learning framework for creating scalable applications that can be used by data scientists, mathematicians, and statisticians to implement their algorithms. Mahout also offers core algorithms for classification, clustering, and batch-based filtering.

Apache NiFi is an integrated data logistics platform for automating the movement of data between disparate systems. It provides real-time control that makes it easy to manage the movement of data between any source and any destination.

Apache Pig is a programming framework used to analyze and transform large data sets. It provides a high-level language known as Pig Latin, which helps Hadoop developers write data analysis programs. Using the operators provided by Pig Latin, programmers can develop their own functions for reading, writing, and processing data.

Apache Spark is an open-source, lightning-fast cluster computing technology designed for fast computation. Its in-memory cluster computing increases the processing speed of applications.
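A minimal PySpark word count as a sketch; the chained transformations stay in memory where possible, which is where Spark's speed advantage comes from:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("wordcount").getOrCreate()

    lines = spark.sparkContext.parallelize(
        ["to be or not to be", "that is the question"]
    )
    counts = (
        lines.flatMap(lambda line: line.split())   # split lines into words
        .map(lambda word: (word, 1))               # pair each word with a count
        .reduceByKey(lambda a, b: a + b)           # sum counts per word
    )
    print(counts.collect())
    spark.stop()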
Apache Zeppelin is a web-based notebook tool for interactive data analytics. Zeppelin supports different technologies that aid in analytics, such as SQL, Python, and Apache Spark. Aside from analytics, it can also be used for data discovery, ingestion, collaboration, and visualization.

Applied mathematics is the application of mathematical methods in fields such as science, engineering, business, computer science, and industry. Applied mathematics is thus a combination of mathematical science and specialized knowledge.

In machine learning and data mining, artificial neural networks (ANNs) are computer systems that resemble the neural networks that constitute a biological brain. They are designed to learn tasks from examples. For instance, they can learn to identify certain types of images by analyzing pre-labeled examples that define images as "this is a ..." and "this is not a ...".
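A minimal example of learning a task from pre-labeled examples, sketched with a small feed-forward network from scikit-learn:

    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Pre-labeled examples: images of handwritten digits with their correct labels.
    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # A small network with one hidden layer learns the task purely from examples.
    model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    model.fit(X_train, y_train)
    print("test accuracy:", model.score(X_test, y_test))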
Augmented Consumer Interfaces (ACI) refer to technologies that enhance and extend the way consumers interact with products and services through digital overlays such as AR (Augmented Reality) and VR (Virtual Reality). These interfaces can provide immersive and interactive experiences, improving user engagement and understanding.

AutoML, or Automated Machine Learning, refers to the use of automated processes and tools to streamline and simplify the process of building, training, and deploying machine learning models. It allows individuals and organizations to harness machine learning capabilities without requiring extensive expertise in data science or machine learning.
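One ingredient of AutoML, automated hyperparameter search, sketched here with scikit-learn's GridSearchCV; full AutoML systems also automate model selection, feature engineering, and ensembling:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import GridSearchCV

    X, y = load_iris(return_X_y=True)

    # Try every parameter combination and keep the best model by cross-validation.
    search = GridSearchCV(
        RandomForestClassifier(random_state=0),
        param_grid={"n_estimators": [50, 100], "max_depth": [3, None]},
        cv=5,
    )
    search.fit(X, y)
    print(search.best_params_, search.best_score_)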
Azkaban is a batch workflow job scheduler created at LinkedIn to run Hadoop jobs. Azkaban resolves execution order through job dependencies and provides an easy-to-use web interface for maintaining and tracking workflows.

Azure Databricks is a fast, easy, and collaborative Apache Spark-based big data analytics service designed for data science and data engineering.

Bayesian statistics is a theory in the field of statistics based on the Bayesian interpretation of probability, where probability expresses a degree of belief in an event.
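A worked example of updating a degree of belief with Bayes' theorem, using made-up numbers for a diagnostic test:

    # Bayes' theorem: P(H | D) = P(D | H) * P(H) / P(D)
    prior = 0.01                  # P(condition): 1% of the population is affected
    sensitivity = 0.95            # P(positive | condition)
    false_positive_rate = 0.05    # P(positive | no condition)

    # P(positive), summed over both ways a positive result can occur.
    evidence = sensitivity * prior + false_positive_rate * (1 - prior)
    posterior = sensitivity * prior / evidence  # P(condition | positive)
    print(round(posterior, 3))    # ~0.161: the belief is updated but far from certainty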