The magic of transformers lies in their attention mechanism. But what does that actually mean?
Here's a simplified explanation to build intuition.
Consider: "What is the capital of France?"
As humans, we process it instantly. But for a computer? Different story.
Transformers use a clever trick: for every word (more precisely, every token), the model creates three different representations:
Query (Q) - "What information am I looking for?"
For the word "capital," the query is something like: "What kind of entity am I describing?"
Key (K) - "What information can I provide?"
Every word gets a key that describes what it offers. For the word "capital," the key is something like: "I'm a noun describing geographic/political entities."
Value (V) - "Here's my actual meaning."
The word "capital" has the semantic meaning "main city, governmental center, and administrative importance."
The model compares the query from one word against the keys of all the other words. This produces attention scores.
When the word "capital," with its query of "What kind of entity am I describing?", checks against the keys of all the other words, "France" ends up with the highest score. Higher scores contribute more to the final understanding, so after this step the representation of "capital" is enriched with strong context from "France."
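To make this concrete, here's a minimal sketch of a single attention head in plain Python. The numbers and dimensions below are made up for illustration; real models use learned projection matrices to produce Q, K and V, and much larger vectors:

```python
import math

def softmax(scores):
    # Turn raw scores into weights that sum to 1 (numerically stable).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(Q, K, V):
    # One attention head: each query is compared against every key,
    # and the resulting weights blend the value vectors.
    d = len(K[0])
    output = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]          # query . key / sqrt(d)
        weights = softmax(scores)      # the attention scores
        output.append([sum(w * v[j] for w, v in zip(weights, V))
                       for j in range(len(V[0]))])
    return output

# Three toy 2-d token vectors standing in for "capital", "of", "France".
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
enriched = attention(tokens, tokens, tokens)
```

Each row of `enriched` is a blend of all the value vectors, weighted by how strongly that token's query matched each key.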
This doesn't happen just once. Transformers use multiple attention heads running in parallel, like several people reading the same sentence, each noticing different patterns. One might focus on grammar, another on meaning, another on long-range dependencies.
In another head, the word "capital" could be querying for the timeframe. In that case, the word "is" scores highly, signalling the present tense.
All these attention scores combined give each word a rich context. The word "capital" now knows that it is part of a question, that the question concerns the present, and that it is about "France."
After each attention layer, information flows through a Feed Forward Network. This is where the answers start to form. This network processes the context-enriched representations, helping build toward output predictions like 'Paris.'
The combination of attention + FFN, repeated across layers, gives transformers their power.
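That attention-plus-FFN pattern can be sketched in a few lines. Everything here is simplified for intuition: the `attend` argument stands in for any attention step, the weight matrices are tiny and hypothetical, and layer normalisation is omitted entirely:

```python
def relu(x):
    return max(0.0, x)

def ffn(vec, W1, b1, W2, b2):
    # Position-wise feed-forward network: expand, apply a nonlinearity,
    # then project back. Applied to each token independently.
    hidden = [relu(sum(w * x for w, x in zip(row, vec)) + b)
              for row, b in zip(W1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]

def transformer_block(tokens, attend, W1, b1, W2, b2):
    # Attention mixes information across tokens; the FFN then processes
    # each enriched token on its own. Residual additions keep the
    # original signal flowing through (layer norm omitted for brevity).
    mixed = attend(tokens)
    mixed = [[t_j + m_j for t_j, m_j in zip(t, m)]
             for t, m in zip(tokens, mixed)]
    return [[m_j + f_j for m_j, f_j in zip(m, ffn(m, W1, b1, W2, b2))]
            for m in mixed]
```

Stacking this block several times, each with its own learned weights, is what gives a transformer its depth.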
Unlike older models that processed words one at a time, transformers attend to every token in the sentence at once, in parallel.
That's transformer attention in action.
*This explanation simplifies many technical details to focus on core concepts. For a deeper dive, check out "Attention Is All You Need" by Vaswani et al.*
Recent advancements in AI, especially deep learning, have contributed to a significant increase in the creation of new realistic-looking synthetic media (video, image, and audio) and manipulation of existing media, which has led to the creation of the new term “deepfake”. Based on both the research literature and resources in English and in Chinese, this paper gives a comprehensive overview of deepfake, covering multiple important aspects of this emerging concept, including 1) different definitions, 2) commonly used performance metrics and standards, and 3) deepfake-related datasets, challenges, competitions and benchmarks. In addition, the paper also reports a meta-review of 12 selected deepfake-related survey papers published in 2020 and 2021, focusing not only on the mentioned aspects, but also on the analysis of key challenges and recommendations. We believe that this paper is the most comprehensive review of deepfake in terms of aspects covered, and the first one covering both the English and Chinese literature and sources.
Keywords: Deepfake, Survey, Definition, Datasets, Benchmarks, Challenges, Competitions, Standards, Performance Metrics.
Recent advancements in AI and machine learning have increased the capability to produce more realistic media, e.g., video, image, and audio. In particular, state-of-the-art deep learning methods have enabled the generation of “deepfakes”: manipulated or synthetic media whose realness is not easily recognisable by the human eye. Although deepfake is a relatively new phenomenon (having first appeared at the end of 2017), its growth has been remarkable. According to the 2019 and 2020 Deeptrace reports on the state of deepfake [2], the number of deepfake videos on the English-speaking internet grew from 7,964 (December 2018) to 14,678 (July 2019) to 85,047 (December 2020), representing a 968% increase from 2018 to 2020.
In this work, we review the existing deepfake-related research ecosystem in terms of various aspects, including performance metrics and standards, datasets, challenges, competitions, and benchmarks. Furthermore, we provide a meta-review of 12 selected deepfake-related survey papers, which systematically covers several additional aspects, such as performance comparison, key challenges, and recommendations.
Despite being a hugely popular term, there is a lack of consensus on the definition of “deepfake” and the boundary between deepfakes and non-deepfakes is not clear cut. For this survey, we adopt a relatively more inclusive approach to cover all forms of manipulated or synthetic media that are considered deepfakes in a broader sense. We also cover closely related topics including biometrics and multimedia forensics, since deepfakes are often used to launch presentation attacks against biometrics-based authentication systems and detection of deepfakes can be considered part of multimedia forensics. A more detailed discussion on different definitions of “deepfake” is given next.
As its name implies, the term “deepfake” is derived from the combination of “deep” (referring to deep learning (DL)) and “fake”. It is normally used to refer to the manipulation of existing media (image, video and/or audio) or the generation of new (synthetic) media using DL-based approaches. The most commonly discussed deepfake data are fake face images, fake speech forgeries, and fake videos that combine both fake images and fake speech forgeries. While having “fake” in the word indicates manipulated or synthesised media, there are plenty of benign applications of the deepfake technology, e.g., for entertainment and creative arts. In this respect, another term, “deep synthesis”, has been proposed as a more neutral-sounding alternative [60]. This new term, however, has not been widely adopted.
In addition to the lack of a universal definition, as mentioned already, the boundary between deepfakes and non-deepfakes is not clear cut. There are at least two important aspects to consider, one concerning the detection and the other the creation of deepfakes.
First, detection of deepfakes often follows very similar approaches to detection of traditional fakes generated without using DL techniques. Advanced detection methods have also started leveraging DL to improve their performance, but they do not necessarily need to know how a target medium was created (deep or not). To some extent, one could argue that detecting deepfakes does not require developing deepfake-specific methods (even though some researchers choose to do so), but rather a more robust and universal detector that can handle any (deep or not) fake media. This can be seen in two closely related topics: biometrics and multimedia forensics. For biometrics, there is a trend of using deep learning techniques to generate fake biometric signals (e.g., face images and videos) for biometric spoofing or presentation attacks. For multimedia forensics, deepfake-based forgeries have become a new threat to the traditional problem of “forgery detection”. For both topics, detection of biometric spoofing and of multimedia forgeries has evolved to consider both deep and non-deep fakes.
Second, one may argue that the word “deep” in “deepfake” does not necessarily refer to the use of “deep learning”, but to any “deep” (i.e., sophisticated) technology that creates a very believable fake media. For instance, Brady [9] considered deepfake as audio-visual manipulation using “a spectrum of technical sophistication … and techniques”. They also introduced two new terms, Shallowfake and Cheapfake, referring to “low level manipulation of audio-visual media created with (easily) accessible software [or no software] to speed, slow, restage or re-contextualise content”. This broader understanding of “deepfake” has also been adopted by lawmakers in new legislation combating malicious deepfakes. For instance, the following two United States acts define “deepfakes” as follows:
§1041.(b).(2): “the term ‘deep fake’ means an audiovisual record created or altered in a manner that the record would falsely appear to a reasonable observer to be an authentic record of the actual speech or conduct of an individual.”
§1041.(n).(3): “The term ‘deep fake’ means any video recording, motion-picture film, sound recording, electronic image, or photograph, or any technological representation of speech or conduct substantially derivative thereof—
(A) which appears to authentically depict any speech or conduct of a person who did not in fact engage in such speech or conduct; and
(B) the production of which was substantially dependent upon technical means, rather than the ability of another person to physically or verbally impersonate such person.”
As we can see from the above legal definitions of “deepfake”, the use of DL as a technology is not mentioned at all. The focus here is on “authenticity”, “impersonation” and (any) “technical means”.
Based on the above discussion on definitions of deepfake, we can see it is not always straightforward or meaningful to differentiate deepfakes from non-deep fakes. In addition, for our focus on performance evaluation and comparison, the boundary between deepfakes and non-deep fakes is even more blurred. This is because DL is just a special (deeper) form of machine learning (ML), and as a result, DL and non-deep ML methods share many common concepts, metrics and procedures.
Despite the fact that deepfake may be understood in a much broader sense, in this work we adopt a narrower focus to avoid covering too many topics. We, therefore, decided to define the scope of this survey as follows:
Research papers covered in this survey (i.e., the deepfake-related survey papers) were identified via systematic searches on two scientific databases, Scopus and China Online Journals (COJ). The following search queries were used to perform the searches on Scopus and COJ, respectively:
(deepfake* OR deep-fake* OR “deep fake*”) AND (review OR survey OR overview OR systemati* OR SoK)
(deepfake OR 深度伪造) AND (综述 OR 进展)
The searches returned 41 survey papers in English and 15 survey papers in Chinese. Out of these papers, eight published in English and four published in Chinese were selected for consideration.
Deepfake-related challenges, competitions and benchmarks were identified via multiple sources: the survey papers selected, research papers from the co-authors’ personal collections, Google Web searches, and manual inspection of websites of major AI-related conferences held in 2020 and 2021 (where such challenges and competitions are routinely organised). The inspected conferences include those listed in the ACL (Association for Computational Linguistics) Anthology, ICCV, CVPR, AAAI, ICML, ICLR, KDD, SIGIR, WWW, and many others. In addition, a comprehensive list of datasets was compiled based on the selected survey papers and the identified challenges, competitions, and benchmarks. Relevant standards were identified mainly via research papers covered in this survey, the co-authors’ personal knowledge, and Google Web searches. For performance metrics, we covered those commonly used based on relevant standards, the survey papers, and the identified challenges, competitions, and benchmarks.
In this survey, we focus on performance evaluation and comparison of deepfake generation and detection methods. The metrics used for such performance evaluations are at the core of our discussions. In this section, we review the performance metrics that are commonly used to evaluate deepfake generation and detection algorithms. Note that all metrics covered in this section are also commonly used for evaluating performance of similar systems that are not for generating or detecting deepfakes. Therefore, this section can be seen as a very brief tutorial on general performance metrics.
In the last subsection, we also briefly discuss how the related performance metrics are covered in formal standards. By “formal standards”, we refer to standards defined following a formal procedure, often by one or more established standardisation bodies such as the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). Note that we consider a broad range of documents defined to be standards by standardisation bodies, e.g., International Telecommunication Union (ITU) recommendations and ISO technical reports (TRs).
Deepfake detection is primarily a binary classification problem. A binary classifier takes an input that is actually positive or actually negative and outputs a binary value denoting it to be predicted positive or predicted negative. For example, a deepfake detection system will take a suspected image as the input that may be actually fake or actually real and output predicted fake or predicted real.
A fundamental tool used in evaluating a binary classifier is the confusion matrix that summarises the success and failure of the classification model. On one axis are the two actual values and on the other axis are the two predicted values. The classification is successful/correct/true (true positive and true negative) when the actual and the predicted values match. It is failed/incorrect/false (false positive and false negative) when the actual and predicted values do not match. Table 1 shows the confusion matrix for a binary deepfake classifier (detector). The two cells in green, TP (the number of true positives) and TN (the number of true negatives), indicate correct prediction results, and the two cells in red, FN (the number of false negatives) and FP (the number of false positives), indicate two different types of errors when making incorrect prediction results.
Table 1: Confusion matrix for a binary classifier for detecting deepfake.
| | fake (predicted) | real (predicted) |
|----|----|----|
| fake (actual) | TP | FN |
| real (actual) | FP | TN |
Based on the four fundamental values introduced in Section 3.1, i.e., TP, TN, FP and FN, we define two important performance metrics for a binary classifier – precision and recall.
Precision of a binary classifier is defined as the fraction of actually positive samples among all the predicted positives. In the confusion matrix, it is the fraction of true samples in the first column. It can be formally defined as in Eq. (1).
$$\mathrm{Precision} = \frac{TP}{TP + FP} \tag{1}$$
When the “natural” ratio between positive and negative samples differs significantly from that in the test set, it is often useful to adjust the weight of the false positives, which leads to the weighted precision (wP) defined in Eq. (2), where α > 0 is a weight determined by the ratio between the negative and positive samples.
$$wP = \frac{TP}{TP + \alpha \cdot FP} \tag{2}$$
Recall of a binary classifier is the fraction of predicted positive samples among the actually positive samples, as shown in Eq. (3). In the confusion matrix, it is the fraction of true samples in the first row.
$$\mathrm{Recall} = \frac{TP}{TP + FN} \tag{3}$$
Let us consider an example binary classifier that predicts if an image from a database containing both deepfake and real (authentic) images is fake or not. Precision of the classifier is the fraction of correctly classified images among all images classified as deepfake. On the other hand, recall is the fraction of deepfake images identified by the classifier, among all deepfake images in the database.
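The example above can be expressed in a few lines of Python (an illustrative sketch; the detector counts used below are hypothetical):

```python
def precision(tp, fp):
    # Fraction of images classified as deepfake that are actually fake.
    return tp / (tp + fp)

def recall(tp, fn):
    # Fraction of all actually fake images that the detector identified.
    return tp / (tp + fn)

# Hypothetical results: 40 fake images in the database, 30 of which were
# flagged as fake, plus 5 real images wrongly flagged as fake.
tp, fp, fn = 30, 5, 10
p = precision(tp, fp)  # 30/35
r = recall(tp, fn)     # 30/40
```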
Focusing on predicted positive samples, we can also define two metrics: true positive rate (TPR), also called correct detection rate (CDR), as the fraction of the predicted positive samples among the actually positive samples, and false positive rate (FPR), also called false alarm rate (FAR), as the fraction of the predicted positive samples among the actually negative samples, as shown in Eqs. (4) and (5). In the confusion matrix, TPR is the fraction of predicted positive samples in the first row and FPR is the fraction of predicted positive samples in the second row. Note that TPR is simply another name for recall (Eq. (3)).
$$\mathrm{TPR} = \frac{TP}{TP + FN} \tag{4}$$
$$\mathrm{FPR} = \frac{FP}{FP + TN} \tag{5}$$
Similar to true and false positive rates, we can define two other rates focusing on negative predicted results: true negative rate (TNR) indicating the fraction of the predicted negative samples among the actually negative samples, and false negative rate (FNR) indicating the fraction of the predicted negative samples among the actually positive samples, as shown in Eqs. (6) and (7).
$$\mathrm{TNR} = \frac{TN}{TN + FP} \tag{6}$$
$$\mathrm{FNR} = \frac{FN}{FN + TP} \tag{7}$$
In some applications of binary classifiers, especially in biology and medicine, the TPR and the TNR are more commonly used, and they are often called sensitivity (TPR) and specificity (TNR). The focus of these two terms is on the two types of correctness of the predicted results. These are less used in deepfake-related research, hence, we will not refer to them in the remainder of this paper.
Focusing on error rates means that we need to consider the FPR and the FNR. These two rates normally conflict with each other so that reducing one rate normally leads to an increase in the other. Therefore, rather than trying to reduce both error rates at the same time, which is normally impossible, the more realistic task in practical applications is to find the right balance so that they are both below an acceptable threshold.
In some applications, such as biometrics, people are particularly interested in establishing the so-called equal error rate (EER) or crossover error rate (CER), the point where the FPR and the FNR are equal. The EER/CER is not necessarily a good metric for some applications, especially when the two types of errors are of different levels of importance, e.g., for detecting critical deepfakes (e.g., fake news that can influence how people cast their votes) we can often tolerate more false positives (false alarms) than false negatives (missed alarms).
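For illustration, the EER can be approximated by sweeping candidate thresholds over a set of prediction scores and keeping the point where the FPR and the FNR are closest (a Python sketch with made-up scores; practical implementations typically interpolate between thresholds):

```python
def error_rates(scores, labels, t):
    # FPR and FNR at threshold t (score >= t means "predicted fake").
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    fpr = sum(s >= t for s in neg) / len(neg)
    fnr = sum(s < t for s in pos) / len(pos)
    return fpr, fnr

def eer(scores, labels):
    # Sweep every observed score as a threshold and keep the one where
    # FPR and FNR are closest -- a discrete approximation of the EER.
    best_t = min(sorted(set(scores)),
                 key=lambda t: abs(error_rates(scores, labels, t)[0]
                                   - error_rates(scores, labels, t)[1]))
    fpr, fnr = error_rates(scores, labels, best_t)
    return (fpr + fnr) / 2
```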
In addition to the EER/CER, there are also other metrics that try to reflect both types of errors, in order to give a more balanced indication of the overall performance of a binary classifier. The two most commonly used are accuracy and F-score (also called F-measure). Both metrics can be defined based on the four fundamental values (TP, TN, FP, and FN).
Accuracy of a binary classifier is defined as the fraction of correctly predicted samples (true positives and true negatives) among the total number of samples that have been classified, as shown in Eq. (8).
$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN} \tag{8}$$
The F-score of a binary classifier is actually a family of metrics. Its general form can be described based on a parameter β as defined in Eq. (9).
$$F_\beta = \frac{(1 + \beta^2) \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\beta^2 \cdot \mathrm{Precision} + \mathrm{Recall}} \tag{9}$$
The most widely used member of the F-score family is the so-called F1-score, which is simply the F-score with β = 1. More precisely, it is defined as shown in Eq. (10).
$$F_1 = \frac{2 \cdot \mathrm{Precision} \cdot \mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}} = \frac{2\,TP}{2\,TP + FP + FN} \tag{10}$$
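As an illustration, the F-score family can be computed directly from precision and recall (a simple Python sketch; the input values below are hypothetical):

```python
def f_score(precision, recall, beta=1.0):
    # General F-score (Eq. (9)); beta > 1 weights recall more heavily,
    # beta < 1 weights precision more heavily, beta = 1 gives the F1-score.
    b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)

f1 = f_score(0.8, 0.6)            # balanced harmonic mean
f2 = f_score(0.8, 0.6, beta=2.0)  # recall-weighted
```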
Receiver operating characteristic (ROC) curves are commonly used to measure the performance of binary classifiers that output a score (or probability) of prediction.
Consider the following. Let S be the set of all test samples and let the output scores f(s) (for all s ∈ S) lie in the interval [a, b] on the real line. Let t ∈ [a, b] be a prediction threshold for the model, and assume that the classifier works as follows for all s ∈ S:
$$\hat{y}(s) = \begin{cases} \text{positive}, & f(s) \ge t \\ \text{negative}, & f(s) < t \end{cases} \tag{11}$$
It is easy to see that, for t = a, all the samples will be classified as positive, leading to FN = TN = 0 so TPR = FPR = 1; while for t = b, all the samples will be classified as negative, leading to FP = TP = 0 so TPR = FPR = 0. For other threshold values between a and b, the values of TPR and FPR will normally be between 0 and 1. By changing t from a to b continuously, we can normally get a continuous curve that describes how the TPR and FPR values change from (1,1) to (0,0) on the 2D plane. This curve is the ROC curve of the binary classifier.
For a random classifier, assuming that f(s) is distributed uniformly on [a, b] for the test set, we can mathematically derive that its ROC curve is the TPR = FPR line, whose area under the ROC curve (AUC) is 0.5. For a binary classifier that performs better than a random predictor, we can also mathematically prove that its AUC is always higher than 0.5, with 1 being the best possible value. Note that a binary classifier with an AUC below 0.5 is of little practical concern, since one can simply flip its predictions to obtain a better predictor with an AUC of 1 − AUC. The relationship between the ROC and the AUC is graphically illustrated in Figure 1.
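The construction described above can be sketched as follows (a simplified Python illustration using a discrete threshold sweep and trapezoidal integration; the scores are hypothetical):

```python
def roc_points(scores, labels):
    # (FPR, TPR) pairs as the threshold sweeps down through every
    # observed score; the curve starts at (0, 0) and ends at (1, 1).
    pos = sum(labels)
    neg = len(labels) - pos
    points = [(0.0, 0.0)]
    for t in sorted(set(scores), reverse=True):
        tp = sum(y == 1 and s >= t for s, y in zip(scores, labels))
        fp = sum(y == 0 and s >= t for s, y in zip(scores, labels))
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    # Trapezoidal area under the piecewise-linear ROC curve.
    return sum((x2 - x1) * (y1 + y2) / 2
               for (x1, y1), (x2, y2) in zip(points, points[1:]))
```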

Another widely used performance metric for binary classifiers that can return a probability score for the predicted label is log loss. For a binary classification with a true label y ∈ {0, 1} and an estimated probability p = Pr(y = 1), the log loss per sample is the negative log-likelihood of the classifier given the true label, defined as shown in Eq. (12).
$$\ell(y, p) = -\big(y \log p + (1 - y) \log(1 - p)\big) \tag{12}$$
Given a testing set with n samples, the log loss score of a binary classifier can be calculated using Eq. (13), where $y_i$ is 1 if the i-th sample is actually positive and 0 otherwise, and $\hat{y}_i$ is the predicted probability that $y_i = 1$.
$$\mathrm{LogLoss} = -\frac{1}{n}\sum_{i=1}^{n}\big(y_i \log \hat{y}_i + (1 - y_i) \log(1 - \hat{y}_i)\big) \tag{13}$$
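A direct Python transcription of this definition might look as follows (the clipping constant `eps` is a common implementation detail to keep the logarithm finite, not part of the formal definition):

```python
import math

def log_loss(y_true, y_prob, eps=1e-15):
    # Mean negative log-likelihood over a test set; probabilities are
    # clipped away from exactly 0 and 1 before taking logarithms.
    total = 0.0
    for y, p in zip(y_true, y_prob):
        p = min(max(p, eps), 1 - eps)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)
```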
All metrics that are defined based on the four basic values TP, TN, FP and FN can be easily extended to multi-class classification by considering the prediction to be true or false individually with respect to each class. For example, if the system is classifying animals (cats, dogs, horses, lions, tigers, etc.), then a true positive prediction of an image to be of a cat, would simultaneously be true negative predictions for the remaining classes (dogs, horses, lions, tigers, etc.). If an image of a cat is incorrectly predicted to be that of a dog, it would be a false negative with respect to a cat, a false positive with respect to a dog, and a true negative with respect to all other classes.
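This one-vs-rest bookkeeping can be sketched as follows (an illustrative Python snippet; the animal labels follow the example above):

```python
def one_vs_rest_counts(y_true, y_pred, classes):
    # Per-class TP/FP/FN/TN from multi-class predictions, treating each
    # class in turn as "positive" and all other classes as "negative".
    counts = {}
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        tn = len(y_true) - tp - fp - fn
        counts[c] = dict(TP=tp, FP=fp, FN=fn, TN=tn)
    return counts
```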
By definition, the main goal of deepfakes is to make it hard or impossible for human consumers (listeners or viewers) to distinguish fake media from real media. Therefore, when evaluating the quality of deepfake media, the quality perceived by human consumers of the media is key. This calls for subjective assessment of the perceptual quality of the deepfake media as the “gold standard”. The most widely used subjective perceptual quality assessment (PQA) metric for audio-visual signals is mean opinion score (MOS), which has been widely used by the signal processing and multimedia communication communities, including digital TV and other multimedia-related consumer applications. As its name implies, MOS is calculated by averaging the subjective scores given by a number of human judges, normally following a numerical scale between 1 and 5 or between 0 and 100. MOS has been used in some deepfake-related challenges (see Section 5.2) and also for evaluating and comparing the quality (realness/naturalness) of deepfake datasets (see Section 4.6).
As a general subjective PQA metric, MOS has been standardised by the ITU. There are also ITU standards defining more specific subjective Video Quality Assessment (VQA) metrics and the standard procedures one should follow to conduct VQA user studies, e.g., ITU-T Recommendation P.910 “Subjective video quality assessment methods for multimedia applications”. Note that the ITU standards focus more on traditional perceptual quality, i.e., how good a signal looks or sounds, even if it does not look or sound real (e.g., too smooth). On the other hand, for deepfakes the focus is rather different, because what matters is the realness and naturalness of the created media, i.e., how real and natural it looks or sounds, even if it is of low quality. To some extent, we can also consider realness and naturalness as a special aspect of perceptual quality.
One major problem of subjective PQA metrics like MOS is the need to recruit human judges and to have a well-controlled physical testing environment and protocol, which are not easy for many applications. To help reduce the efforts and costs of conducting PQA-related user studies, various objective PQA metrics have been proposed, where the term “objective” refers to the fact that such metrics are human-free, i.e., automatically calculated following a computational algorithm or process. Depending on whether a reference exists, such objective PQA metrics can be largely split into three categories: full-reference (FR) metrics (when the original “perfect-quality” signal is available as the reference), reduced-reference (RR) metrics (when some features of the original “perfect-quality” signal are available as the reference), and no-reference (NR) metrics (when the original signal is unavailable or such an original signal does not exist). For deepfakes, normally NR or RR metrics are more meaningful because the “fake” part of the word means that part of the whole data does not exist in the real world, hence a full reference cannot be obtained. RR metrics are still relevant because deepfakes are often produced for a target’s specific attributes (e.g., face and voice), where the reduced reference will be such attributes. NR metrics will be useful to estimate the realness and naturalness of a deepfake, simulating how a human judge would rate it in a controlled subjective PQA user study.
PQA is a very active research area and many PQA metrics have been proposed, some of which have been widely used in real-world products and services, e.g., mean squared error (MSE), peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) for FR PQA of digital images and videos, defined as in Eqs. (14), (15), and (16), respectively, where $X = \{x_i\}_{i=1}^n$ is the reference (the original signal), $Y = \{y_i\}_{i=1}^n$ is the signal whose visual quality is assessed, n is the number of pixels in X and Y, L is the maximum possible pixel value of X and Y (e.g., 255 for 8-bit gray-scale images), and $c_1 = (k_1 L)^2$ and $c_2 = (k_2 L)^2$ are two stabilising parameters ($k_1 = 0.01$ and $k_2 = 0.03$ by default). For more about PQA metrics for different types of multimedia signals, we refer readers to some relevant surveys [3, 51, 72].
$$\mathrm{MSE}(X, Y) = \frac{1}{n}\sum_{i=1}^{n}(x_i - y_i)^2 \tag{14}$$
$$\mathrm{PSNR}(X, Y) = 10 \log_{10}\frac{L^2}{\mathrm{MSE}(X, Y)} \tag{15}$$
$$\mathrm{SSIM}(X, Y) = \frac{(2\mu_X \mu_Y + c_1)(2\sigma_{XY} + c_2)}{(\mu_X^2 + \mu_Y^2 + c_1)(\sigma_X^2 + \sigma_Y^2 + c_2)} \tag{16}$$
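As an illustration, MSE and PSNR can be computed as follows (a Python sketch treating images as flat pixel sequences; SSIM is omitted because it additionally requires windowed means, variances and covariances):

```python
import math

def mse(X, Y):
    # Mean squared error between two equal-size pixel sequences.
    return sum((x - y) ** 2 for x, y in zip(X, Y)) / len(X)

def psnr(X, Y, L=255):
    # Peak signal-to-noise ratio in decibels; L is the peak pixel value.
    # Identical signals have zero MSE, i.e. infinite PSNR.
    m = mse(X, Y)
    return float("inf") if m == 0 else 10 * math.log10(L * L / m)
```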
Many of the basic performance metrics described in this section have been widely used by deepfake researchers as de facto standards, e.g., EER, log loss and MOS have been widely used in deepfake-related challenges (see Section 5). Also, the combination of precision, recall and F1-score has been widely used to assess the performance of binary classifiers. While there have been a number of ITU standards on PQA to date, there do not seem to be many standardisation efforts on performance metrics for the evaluation of binary classifiers. This was the case until at least 2017, when ISO and IEC jointly set up ISO/IEC JTC 1/SC 42, a standardisation subcommittee (SC) focusing on AI under ISO/IEC JTC 1, the joint technical committee for standardising “information technology”.
One recent effort that ISO/IEC JTC 1/SC 42 made is to produce the ISO/IEC TR 24029-1:2021 “Artificial Intelligence (AI) – Assessment of the robustness of neural networks – Part 1: Overview”, a technical report (TR) that systematically covers many commonly used performance assessment concepts, methods and metrics. Although the technical report has “neural networks” in its title, most of the performance assessment concepts, methods and metrics included are common to all supervised machine learning models.
In terms of performance metrics, two other ongoing work items of the ISO/IEC JTC 1/SC 42 that deserve attention are as follows:
While ISO/IEC JTC 1/SC 42 was created very recently, another standardisation subcommittee under ISO/IEC JTC 1 has a much longer history of nearly 20 years: ISO/IEC JTC 1/SC 37, which focuses on biometrics-related technology. This standardisation subcommittee is highly relevant for deepfake since deepfake faces can be used to spoof biometrics-based user authentication systems. In this context, the following three standards are of particular relevance:
ISO/IEC 19795-1:2021 “Information technology – Biometric performance testing and reporting – Part 1: Principles and framework”: This standard covers general metrics about evaluating biometric systems. Two major metrics in this context are false accept rate (FAR) and false reject rate (FRR), which refer to the standard FPR and FNR, respectively. This standard also deprecates the use of single-number metrics including the EER and AUC (which were widely used in biometrics-related research in the past).
ISO/IEC 30107-1:2016 “Information technology – Biometric presentation attack detection – Part 1: Framework”: This standard defines a general framework for presentation attack detection (PAD) mechanisms, where the term “presentation attack” refers to the “presentation of an artefact or of human characteristics to a biometric capture subsystem in a fashion intended to interfere with system policy”. It focuses on biometric recognition systems, where a PAD mechanism is a binary classifier trying to predict presentation attacks (also called attack presentations, e.g., fake faces) as positive and bona fide (real) presentations as negative.
ISO/IEC 30107-3:2017 “Information technology – Biometric presentation attack detection – Part 3: Testing and reporting”: This standard defines a number of special performance metrics for evaluating PAD mechanisms standardised in the ISO/IEC 30107-1:2016. Three such metrics look at error rates: attack presentation classification error rate (APCER) referring to the standard FPR, normal/bona fide presentation classification error rate (NPCER/BPCER) referring to the standard FNR, and average classification error rate (ACER) that is defined as the average of the APCER and the NPCER/BPCER. Such metrics have been used in biometrics-related challenges such as the Face Anti-spoofing (Presentation Attack Detection) Challenges. When deepfake images or videos are used to spoof a biometric system, such standardised metrics will become relevant.
This section provided a comprehensive summary of performance metrics used for evaluating and benchmarking binary classifiers. It is rare that all such metrics are used for a specific application. Instead, one or several are chosen based on specific needs. For a deepfake detection system as a binary classifier, many researchers have chosen to use overall metrics such as accuracy, AUC, EER and log loss, but the combination of precision, recall and F1-score is also common. Some deepfake-related challenges and competitions have introduced their own specific metrics, some of which will be described in Section 5. The use of different performance metrics can make comparison of different reported results more difficult, so we hope the expected new ISO/IEC standards, particularly ISO/IEC 4213, will help.
It is worth mentioning that, in addition to evaluating performance of deepfake detectors, the introduced performance metrics for evaluating binary classifiers can also be used to evaluate performance of deepfake generation methods by considering how deepfake detectors fail. For instance, organisers of the Voice Conversion Challenge 2018 and 2020 used this approach to benchmark how well voice conversion (VC) systems can generate high-quality fake speech samples.
Another point we would like to mention is that for deepfake videos there are two levels of performance metrics: those at the frame level (metrics of each frame), and those at the video level (metrics for the whole video). Generally speaking, the latter can be obtained by averaging the former for all frames, potentially following an adaptive weighting scheme, so that more important (key) frames will be counted more.
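This frame-to-video aggregation can be sketched as a weighted average (an illustrative Python snippet; the weighting scheme itself is application-specific):

```python
def video_score(frame_scores, weights=None):
    # Video-level metric as a (possibly weighted) average of frame-level
    # metrics; key frames can be given larger weights so that they count
    # more towards the final score.
    if weights is None:
        weights = [1.0] * len(frame_scores)
    return sum(s * w for s, w in zip(frame_scores, weights)) / sum(weights)
```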
In this section, we cover all deepfake-related datasets we identified from the meta-review of deepfake-related survey papers, the deepfake-related challenges, competitions and benchmarks covered, one online collection of deepfake-related datasets on GitHub, and the co-authors’ personal collections. Table 2 shows basic information about these datasets. We explain them in four categories: deepfake image datasets, deepfake video datasets, deepfake audio/speech datasets, and hybrid deepfake datasets (mainly mixed image and video datasets).
Note that many datasets of real (authentic) media have also been used by deepfake researchers, for two purposes. First, any detector needs both fake and real media to demonstrate its performance. Second, real media have also been used as training sets for deepfake generators. In this section, we include only datasets containing deepfake media, some of which contain both deepfake and real media.
Some datasets, especially those created for deepfake-related challenges and competitions, have separate subsets for training and evaluation (testing) purposes. The split is necessary for such challenges and competitions, but not very useful for people who just want to use the datasets. Therefore, when introducing such datasets in this section, we ignore that level of detail and focus on the total amount of data, including the numbers of real and fake samples.

\
SwapMe and FaceSwap dataset [78]: This dataset contains 4,310 images, including 2,300 real images and 2,010 fake images created using FaceSwap21 and the SwapMe iOS app (now discontinued).
Fake Faces in the Wild (FFW) dataset [32]: This dataset contains 131,500 face images, including 78,500 images extracted from 150 videos in the FaceForensics dataset and 53,000 images extracted from 150 fake videos collected from YouTube.
generated.photos datasets22: This is a family of commercial datasets provided by Generated Media, Inc., with up to nearly 2.7 million synthetic face images generated by StyleGAN. A free edition with 10,000 128x128 synthetic images is made available for academic research. The website also provides an interactive face generator23 and an API24. The generated.photos datasets have good diversity: five age groups (infants, children, youth, adults, middle-aged), two genders (male and female), four ethnicities (white, black, Latino, Asian), four eye colours (brown, grey, blue, green), four hair colours (brown, black, blond, gray), three hair lengths (short, medium, long), different facial expressions, three head poses (front facing, left facing, right facing), two emotions (joy and neutral), and two face styles (natural, beautified). (According to a number of research papers we read, an earlier 100K-Faces dataset was released by generated.photos for academic research in 2018 and was used by many researchers; it is no longer available.)
MesoNet Deepfake Dataset [1]: This dataset includes 19,457 face images, including 7,948 deepfake images generated from 175 forged videos collected online and 11,509 real face images collected from various online sources. (Table 2 of the paper gives the dataset size as 19,509, but the dataset downloaded from pCloud contains just 19,457 images.)
100K-Generated-Images [30]: This dataset includes 100,000 face, bedroom, car and cat images synthesised by a GAN generator trained on real images in the FFHQ25 and LSUN26 datasets (three object types – bedrooms, cars and cats – for the latter). Note that “100K-Generated-Images” is not an official name: the authors [30] simply used it to name a sub-folder of their Google Drive shared space, but it was adopted by one of the survey papers [65].
Ding et al.’s swapped face dataset [17]: This dataset contains 420,053 images of celebrities, including 156,930 real ones downloaded using the Google Image API and 263,123 fake face-swapped ones created using two different methods (Nirkin’s method and Auto-Encoder-GAN).
iFakeFaceDB [48]: This dataset includes 87,000 224x224 face images, generated by processing StyleGAN-generated synthetic images with the GAN-fingerprint removal approach (GANprintR) proposed by Neves et al. It replaces the earlier FSRemovalDB dataset, which contained 150,000 face images generated using an earlier version of GANprintR.
Faces-HQ [21]: This dataset includes 40,000 images, half real and half deepfake. The images were collected from four sources: the CelebA-HQ dataset27, the Flickr-Faces-HQ dataset28, the 100K-Faces dataset29 (not available any longer, see the description of generated.photos datasets), and thisperson-doesnotexist.com.
CelebA-Spoof [75]: This dataset includes 625,537 face images of 10,177 celebrities, with 43 rich attributes covering face, illumination, environment and spoof types. The real images were selected from the CelebA dataset30. The 43 attributes include 40 for real images, covering all facial components and accessories (e.g., skin, nose, eyes, eyebrows, lip, hair, hat, eyeglass), and 3 for fake images, covering spoof types, environments and illumination conditions.
Diverse Fake Face Dataset (DFFD) [11]: This dataset contains 299,039 images, including 58,703 real images sampled from three datasets (FFHQ31, CelebA32 and FaceForensics++33) and 240,336 fake ones in four main facial manipulation types (identity swap, expression swap, attribute manipulation, and entire synthesis). The images cover two genders (male and female), a wide range of age groups (the majority between 21 and 50 years old), and both low and high quality levels.
\
DeepfakeTIMIT [35]: This dataset contains 620 deepfake face videos, generated by face swapping without manipulation of audio, covering 32 subjects and two quality levels (high and low).
FaceForensics (FF) [55]: This dataset contains 1,004 face videos with over 500,000 frames, covering various quality levels and two types of facial manipulation. This dataset is now replaced by the larger FaceForensics++ dataset (see below).
FaceForensics++ (FF++) [56]: This dataset contains 5,000 face videos with over 1.8 million manipulated frames, including 1,000 real videos (with 509,914 frames) downloaded from YouTube, and 4,000 fake videos created using four face manipulation methods (Deepfakes, Face2Face, FaceSwap and NeuralTextures). The videos cover two genders (male and female), and three quality levels (VGA/480p, HD/720p, and FHD/1080p).
UADFV dataset [39]: This dataset contains 98 face videos: half (49) are real ones downloaded from YouTube, and the other half are fake ones generated using the FakeApp mobile application (now discontinued). The dataset was created to demonstrate a deepfake video detection method based on detecting eye blinking behaviours, so all videos contain at least one eye-blinking event. All fake videos were created by swapping the original face in each real video with the face of the actor Nicolas Cage34, so only one subject is represented.
Deep Fakes Dataset [10]: This dataset contains 142 “in the wild” deepfake portrait videos, collected from a range of online sources including news articles, online forums, mobile apps, and research presentations. The videos are diverse, covering the source generative model, resolution, compression, illumination, aspect-ratio, frame rate, motion, pose, cosmetics, occlusion, content, and context.
DFDC (Deepfake Detection Challenge) preview dataset [18]: This dataset contains 5,244 face videos of 66 subjects with both face and voice manipulation. It was released as a preview of the full dataset of the 2020 Deepfake Detection Challenge (DFDC, see below).
Celeb-DF v135: This dataset contains 1,203 face videos of celebrities, including 408 real videos collected from YouTube with subjects of different ages, ethnic groups and genders, and 795 deepfake videos synthesised from these real videos.
Celeb-DF v2 [40]: This dataset contains 6,229 face videos of celebrities, including 590 real videos collected from YouTube with subjects of different ages, ethnic groups and genders, and 5,639 deepfake videos synthesised from these real videos.
DeepFake Detection (DFD) Dataset [20]: This dataset contains 3,363 face videos covering 28 subjects of different genders and skin colours. It was created as a joint effort between two units of Google, Inc.: Google AI36 and JigSaw37.
DeeperForensics-1.0 [27]: This dataset contains 60,000 indoor face videos (with 17.6 million frames) generated by face swapping, covering 100 subjects, four skin tones (white, black, yellow, brown), two genders (male and female), different age groups (20-45), 26 nationalities, 7 different angles, 8 face expressions, and different head poses.
DFDC (Deepfake Detection Challenge) full dataset [18]: This dataset contains 128,154 face videos of 960 subjects, including 23,654 real videos from 3,426 paid actors and 104,500 deepfake videos created using eight different methods (DF-128, DF-256, MM/NN face swap, NTH, FSGAN, StyleGAN, refinement, and audio swap).
FFIW10K (Face Forensics in the Wild) dataset [79]: This dataset contains 10,000 high-quality forgery videos, with video- and face-level annotations. The dataset focuses on a more challenging case for forgery detection: each video involves one to 15 individuals, but only some (a minority of) faces are manipulated.
Korean DeepFake Detection Dataset (KoDF) [36]: This dataset contains 237,942 videos of paid subjects (395 Koreans and 8 Southeastern Asians), including 62,166 real videos and 175,776 fake ones created using six methods – FaceSwap, DeepFaceLab, FSGAN, First Order Motion Model (FOMM), Audio-driven Talking Face HeadPose (ATFHP) and Wav2Lip. The videos cover a balanced gender ratio and a wide range of age groups.
VideoForensicsHQ [23]: This dataset contains 1,737 videos with 1,666,816 frames, including 1,339,843 real frames and 326,973 fake frames generated using the Deep Video Portraits (DVP) [34] method. The original videos were obtained from three sources: the dataset used in [33], the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS) [42], and YouTube. Most videos have a resolution of 1280×720.
WildDeepfake [81]: This dataset contains 7,314 face sequences extracted from 707 deepfake videos that were collected entirely from the Internet. It covers diverse scenes, multiple persons per scene and rich facial expressions. Unlike other deepfake video datasets, WildDeepfake contains only face sequences, not full videos, which places it somewhere between an image dataset and a video dataset. We decided to keep it in the video category since the selection process was still video-focused.
\
Voice conversion (VC) is a technology that can be used to modify an audio/speech sample so that it sounds as if spoken by a different (target) person than the original (source) speaker. Obviously, it can be used to generate deepfake audio/speech samples. The biennial Voice Conversion Challenge38, which started in 2016, is a major challenge series on VC. Datasets released from this challenge series are very different from other deepfake datasets: the deepfake data is not included in the original dataset created by the organisers of each challenge, but in the participant submissions (retargeted/fake utterances produced by the VC systems built by participants). The challenge datasets also include the evaluation (listening-based) results of all submissions. Some fake utterances may be produced by DL-based VC systems, so we consider all datasets from this challenge series relevant for the purposes of this survey.
Voice Conversion Challenge 2016 database [62]: The original dataset created by the challenge organisers was derived from the DAPS (Device and Produced Speech) Dataset [47]. It contains 216 utterances (162 for training and 54 for testing) per speaker from 10 speakers. Participating teams (17) developed their own VC systems for all 25 source-target speaker pairs, and then submitted generated utterances for evaluation. At least six participating teams used DL-related techniques (LSTM, DNN) in their VC systems (see Table 2 of the result analysis paper39), so the submitted utterances can certainly be considered deepfakes.
Voice Conversion Challenge 2018 database [44]: The original dataset created by the challenge organisers was also based on the DAPS dataset. It contains 116 utterances (81 for training and 35 for testing) per speaker from 12 speakers in two different tasks (called Hub and Spoke). Participating teams (23 in total, all for Hub and 11 for Spoke) developed their own VC systems for all 16 source-target speaker pairs, and then submitted generated utterances for evaluation. Compared with the 2016 challenge, more participating teams used DL-related techniques (e.g., WaveNet, LSTM, DNN, CycleGAN, DRM – deep relational models, and ARBM – adaptive restricted Boltzmann machines) in their VC systems.
Voice Conversion Challenge 2020 database [70]: This dataset is based on the Effective Multilingual Interaction in Mobile Environments (EMIME) dataset40, a bilingual (Finnish/English, German/English, and Mandarin/English) database. It contains 145 utterances (120 for training and 25 for testing) per speaker from 14 speakers for two different tasks (with 4 × 4 and 4 × 6 source-target speaker pairs, respectively). Participating teams (33 in total, of which 31 for Task 1 and 28 for Task 2) developed their own VC systems for all source-target speaker pairs, and then submitted generated utterances for evaluation. Compared with the 2018 challenge, DL-based VC systems were overwhelmingly dominant, used by almost all participating teams (WaveNet and WaveGAN were among the most used DL-based building blocks).
A major set of deepfake speech datasets were created for the ASVspoof (Automatic Speaker Verification Spoofing and Countermeasures) Challenge41, held biennially since 2015. The datasets for the 2019 and 2021 editions contain speech data that can be considered deepfakes.
ASVspoof 2019 Challenge database [67]: This dataset is based on the Voice Cloning Toolkit (VCTK) corpus42, a multi-speaker English speech database captured from 107 speakers (46 males and 61 females). Two attack scenarios were considered: logical access (LA) involving spoofed (synthetic or converted) speech, and physical access (PA) involving replay attacks using previously recorded bona fide recordings. For our purpose in this survey, the LA scenario is more relevant. The LA part of the dataset includes 12,483 bona fide (real) utterances and 108,978 spoofed utterances. Some of the spoofed speech data for the LA scenario were produced using generative models involving DL-based techniques such as long short-term memory (LSTM)43, WaveNet [50], WaveRNN [28] and WaveCycleGAN2 [58]. Note that the challenge organisers did not use the term “deepfake” explicitly, despite the fact that the DL-generated spoofed speech data can be considered deepfakes.
ASVspoof 2021 Challenge – Logical Access Database [14]: This dataset contains bona fide and spoofed speech data for the logical access (LA) task. The challenge is still ongoing and we did not find a detailed paper on the dataset, so we cannot include more details other than its size (7.8 GB after compression). Although we did not see details of the generative algorithms used to produce the spoofed speech data, we believe DL-based algorithms similar to those used for the 2019 challenge were employed.
ASVspoof 2021 Challenge – Speech Deepfake Database [15]: In 2021, the challenge included an explicitly defined track on deepfake, but the task description suggests that the organisers of the challenge considered a broader definition of the term “deepfake” by looking at spoofing human listeners rather than ASV (Automatic Speaker Verification) systems. The size of the dataset is 34.5 GB after compression.
Possibly because of the long history and wide community participation of the ASVspoof challenges in creating dedicated datasets, there are very few other deepfake audio/speech datasets. One such dataset was created by a group of researchers from Baidu Research [5] to demonstrate a proposed voice cloning method. It is relatively small, containing 134 utterances: 10 real ones, 120 cloned ones, and 4 manipulated ones. Another dataset was created by Google AI and the Google News Initiative44, but it was made part of the ASVspoof 2019 dataset. This dataset contains thousands of phrases spoken by 68 synthetic “voices” covering a variety of regional accents.
\
NIST OpenMFC (Open Media Forensics Challenge) Datasets45: These datasets were created by the DARPA Media Forensics (MediFor) Program46 for the 2020 OpenMFC47. There are two GAN-generated deepfake datasets, one with more than 1,000 deepfake images and the other with over 100 deepfake videos. The datasets were made available to registered participants of the competition only.
ForgeryNet [25]: This dataset is described as “a versatile benchmark for comprehensive forgery analysis”. It contains 2,896,062 images and 221,247 videos, including 1,457,861 fake images and 121,617 fake videos. The videos and images cover seven image-level and eight video-level manipulation approaches, 36 different types of perturbations plus more mixed perturbations, and a large number of annotation labels (6.3 million classification labels, 2.9 million manipulated-area annotations and 221,247 temporal forgery segment labels). The dataset is being used to support the Face Forgery Analysis Challenge 202148 at SenseHuman 2021 (3rd Workshop on Sensing, Understanding and Synthesizing Humans)49, co-located with the ICCV 2021 conference50.
\
DatasetGAN [74]: This is not actually a dataset per se, but a system for automatically producing large datasets, including deepfake datasets. One may argue that the automatically generated datasets are fake, since they are not produced from real-world scenes.
\
As mentioned in Section 4.7, subjective quality evaluation is necessary to evaluate the realness and naturalness of deepfake media. While there has been very limited work on this topic, in 2020 Jiang et al. [27] conducted a user study on the realness of deepfake videos. They recruited 100 professional participants (most of whom were computer vision researchers), who were asked to evaluate the realness of 30 randomly selected videos from 7 deepfake video datasets (DeeperForensics-1.0, UADFV, DeepFake-TIMIT, Celeb-DF, FaceForensics++, Deep Fake Detection, and DFDC). Participants were asked to respond to the statement “The video clip looks real.” and gave scores on a five-point Likert scale (1 – clearly disagree, 2 – weakly disagree, 3 – borderline, 4 – weakly agree, 5 – clearly agree).
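A mean opinion score (MOS) over such Likert ratings is simply their arithmetic mean; a minimal sketch (the ratings below are made up for illustration):

```python
def mean_opinion_score(ratings):
    """MOS of 5-point Likert ratings (1 = clearly disagree ...
    5 = clearly agree): the arithmetic mean of all ratings."""
    return sum(ratings) / len(ratings)

# Hypothetical ratings from ten participants for one video
mos = mean_opinion_score([4, 5, 3, 4, 4, 5, 2, 4, 3, 4])  # 3.8
```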
Table 3 shows the results. Interestingly, we can see a huge difference between the realness levels of different datasets. Perhaps most surprisingly, FaceForensics++, one of the most widely used deepfake datasets, has a very low MOS score: less than 9% of participants considered the 30 selected videos real.
Table 3: Human-judged subjective quality (realness) of deepfake videos in 7 datasets. The MOS scores were not reported by Jiang et al., but calculated by us based on the raw data shown in Table 3 of [27].

\
Among all deepfake image and video datasets, a significant majority are about face images and videos. This is not surprising since face swapping, face attribution manipulation, and fully synthesised face images are among the hottest topics within deepfake research and real-world applications. We hope more non-face deepfake image and video datasets can be produced to support a broader range of research activities on deepfake.
The subjective quality results shown in Table 3 indicate that it is important to check the realness of deepfake media to support any performance evaluation or comparison. To ensure that the quality evaluation of datasets is fair, transparent and reliable, standard procedures need to be defined and a common pool of qualified human assessors should be used.
Many authors of deepfake-related datasets have attempted to classify such datasets into different generations. Chronologically speaking, we can broadly split such datasets into two generations: before 2019 and since 2019. Typically, datasets created before 2019 are less advanced and smaller, while those created since 2019 tend to be larger, more diverse (i.e., covering more attributes), and of higher quality (i.e., produced by more advanced generative models). This can also be seen from the data in Table 3, in which the top two datasets (DeeperForensics-1.0 and Celeb-DF) fall within the new generation (2020), while the others belong to the old generation. In addition to these two generations, a newer generation emerged in 2021: a number of very recent datasets have started focusing on more realistic deepfakes (i.e., in the wild) or more specialised areas of deepfakes (e.g., FFIW10K focusing on multiple faces in the same video, and KoDF focusing on Korean faces). This trend shows that the deepfake research community has grown significantly in the past few years, so that narrower topics have also started gaining attention and interest from researchers.
\
This section reviews initiatives aiming to advance the state-of-the-art of detection and generation of synthetic or manipulated media (such as video, image and audio) via competitions or challenges open to the public, and ongoing benchmarks tackling specific problems.
\
The Deepfake Detection Challenge (DFDC)51 was an initiative promoted by an AI and Media Steering Committee52, including BBC, Facebook, Amazon, Microsoft and the New York Times, and some universities around the world including the University of Oxford. The competition remained open from 5 September 2019 till 31 March 2020 and involved three stages. At first, the DFDC preview dataset was released. At a later stage, the DFDC full dataset, incorporating face and audio swap techniques for the generation of deepfake content, was also made available to the 2,114 participants of the competition. At the final stage, the submitted models were evaluated using a test dataset (referred to as the “black box dataset”) of 10,000 videos which included in-the-wild deepfake videos. The best performance on the black box dataset had an accuracy of 65.18%, according to the released results [22]. Submissions were ranked53 according to the overall log loss score, as defined in Eq. (13). All top five ranked models (the winner had the lowest overall log loss) are available on GitHub. The results indicate how challenging deepfake detection is, since the best accuracy was low and “many submissions were simply random”, according to Dolhansky et al. [19]. Figure 2 shows a screenshot of the leaderboard with the five finalists. The top-ranked model used MTCNN (Multi-task Cascaded Convolutional Network), the second used WS-DAN (Weakly Supervised Data Augmentation Network), and the third used the EfficientNet-B7 architecture. Compiling the common themes observed in the winning models, Meta identified clever augmentations, strong architectures, and the absence of forensics methods. Moving forward, they called for “solutions that go beyond analysing images and video. Considering context, provenance, and other signals may be the way to improve deepfake detection models”.
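The overall log loss used for ranking (Eq. (13)) is the standard binary cross-entropy over all test videos; a minimal sketch of that computation (the clipping constant is our choice, to keep the logarithm finite):

```python
import math

def overall_log_loss(y_true, p_pred, eps=1e-15):
    """Log loss over N videos: y_true holds ground-truth labels
    (1 = fake, 0 = real), p_pred the predicted fake probabilities.
    Lower is better; confident wrong answers are penalised heavily."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += y * math.log(p) + (1 - y) * math.log(1 - p)
    return -total / len(y_true)

# An uninformative constant prediction of 0.5 costs log(2) per video
loss = overall_log_loss([1, 0, 1], [0.9, 0.1, 0.5])
```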

\ The Automatic Speaker Verification Spoofing And Countermeasures Challenge Workshop (ASVspoof)54 has been running biennially since 2015. This competition is organised by an international consortium that includes Inria and EURECOM (France), the University of Eastern Finland, the National Institute of Informatics (Japan), and the Institute for Infocomm Research (Singapore). The 2021 edition of the ASVspoof challenge includes, for the first time, a sub-challenge focused on speech deepfakes, where the envisioned use case is an adversary trying to fool a human listener. The metric used for evaluating the performance of submitted solutions (i.e., classifiers) is the EER. Four baseline solutions55 (also called “countermeasures”), each using a different technique, were made available to participants with their corresponding EER values. The ASVspoof 2021 Speech Deepfake Database, containing audio recordings with original and spoofed utterances, has also been made available. The competition involves three phases56: a progress phase, an evaluation phase and a post-evaluation phase; it is unclear how teams move from one phase to the next. More information about the 2021 competition is available in the published evaluation plan [13]. The organisers of the competition noted that they opted for the EER as the performance evaluation metric for countermeasures submitted to the speech deepfake task for legacy reasons. They acknowledged, however, that “EER reporting is deprecated” by the ISO/IEC 19795-1:202157 standard. Although only the 2021 ASVspoof competition contained a track explicitly related to deepfakes, some data in the ASVspoof 2019 dataset (Logical Access task) used for the 2019 competition was generated using DL-based algorithms, as mentioned in Section 4. We expect that this also holds for the ASVspoof 2021 dataset (Logical Access task).
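For intuition, the EER is the error rate at the threshold where false rejections of genuine samples and false acceptances of spoofed samples balance; it can be approximated by sweeping the threshold over the observed scores. A rough sketch (the function name and score convention are ours, assuming higher score means "more likely genuine"):

```python
import numpy as np

def equal_error_rate(genuine_scores, spoof_scores):
    """Approximate the EER: sweep the decision threshold over all
    observed scores and return the error rate where the false
    rejection rate (genuine rejected) is closest to the false
    acceptance rate (spoof accepted)."""
    genuine = np.asarray(genuine_scores, dtype=float)
    spoof = np.asarray(spoof_scores, dtype=float)
    best_eer, best_gap = 1.0, float("inf")
    for t in np.sort(np.concatenate([genuine, spoof])):
        frr = float(np.mean(genuine < t))   # genuine wrongly rejected
        far = float(np.mean(spoof >= t))    # spoof wrongly accepted
        if abs(frr - far) < best_gap:
            best_gap, best_eer = abs(frr - far), (frr + far) / 2
    return best_eer
```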
The ASVspoof 2019 competition used the EER as a secondary metric; the primary performance metric was the tandem detection cost function (t-DCF) [63]. According to its evaluation plan [69], t-DCF assesses the performance of the whole tandem system whereby “a CM [countermeasure] serves as a ‘gate’ to determine whether a given speech input originates from a bona fide (genuine) user, before passing it to the main biometric verifier (the ASV system)”. It is calculated according to Eq. (17), where P^cm_miss(s) and P^cm_fa(s) are, respectively, “the miss rate and the false alarm rate of the CM system at threshold s”:

t-DCF(s) = C1 · P^cm_miss(s) + C2 · P^cm_fa(s)    (17)

For further information about Eq. (17), including the constants C1 and C2, please refer to the ASVspoof 2019 evaluation plan [69].
An implementation of the t-DCF metric has been made available by the ASVspoof 2019’s organisers in Python58 and Matlab59 formats.
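As a rough sketch only (not the official implementation): for a fixed ASV operating point, the t-DCF reduces to a weighted sum of the two CM error rates, and the reported figure is usually its minimum over CM thresholds. The constants and rates below are illustrative placeholders:

```python
def min_t_dcf(p_cm_miss, p_cm_fa, c1, c2):
    """Minimum t-DCF over a sweep of CM thresholds, assuming the
    simplified weighted-sum form t-DCF(s) = C1*Pmiss(s) + C2*Pfa(s).
    p_cm_miss[i] and p_cm_fa[i] are the CM miss and false alarm
    rates at the i-th threshold; c1 and c2 fold in the ASV error
    rates and application costs/priors (see the evaluation plan)."""
    return min(c1 * m + c2 * f for m, f in zip(p_cm_miss, p_cm_fa))

# Three candidate thresholds; the middle one minimises the cost here
cost = min_t_dcf([0.30, 0.10, 0.02], [0.01, 0.05, 0.40], c1=1.0, c2=2.0)
```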
The Face Anti-spoofing (Presentation Attack Detection) Challenge60 started in 2019. Its first two editions were held at the IEEE/CVF Conference on Computer Vision and Pattern Recognition in 2019 and 2020 (CVPR 2019 and CVPR 2020), respectively. Its third edition was moved to be co-located with the 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021). This competition series was organised by a group of researchers from academia and industry in China, Mexico, Spain, Finland and the US. The 2021 competition focused on 3D high-fidelity mask attacks and followed a two-phase61 process. The first phase was the “development phase”; it started in April 2021 when the CASIA-SURF HiFiMask dataset62 was released to participants. The second phase was the “final ranking phase” (June 2021), when the competition ended. The competition adopted the following performance metrics for the evaluation63 of submitted solutions: attack presentation classification error rate (APCER), normal/bona fide presentation classification error rate (NPCER/BPCER), and average classification error rate (ACER), in accordance with the ISO/IEC 30107-3:201764 standard. Figure 3 provides the leaderboard for the top three solutions.
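These three error rates are straightforward to compute from labelled predictions; a minimal sketch (the label convention is ours: 1 = attack presentation, 0 = bona fide):

```python
def pad_error_rates(y_true, y_pred):
    """ISO/IEC 30107-3 style metrics for a presentation attack
    detector. APCER: fraction of attacks classified as bona fide;
    BPCER: fraction of bona fide presentations classified as
    attacks; ACER: the average of the two."""
    attacks = [p for t, p in zip(y_true, y_pred) if t == 1]
    bona_fide = [p for t, p in zip(y_true, y_pred) if t == 0]
    apcer = sum(1 for p in attacks if p == 0) / len(attacks)
    bpcer = sum(1 for p in bona_fide if p == 1) / len(bona_fide)
    return apcer, bpcer, (apcer + bpcer) / 2

apcer, bpcer, acer = pad_error_rates(
    [1, 1, 1, 1, 0, 0, 0, 0],   # ground truth
    [1, 1, 1, 0, 0, 0, 1, 1])   # predictions
# apcer = 0.25, bpcer = 0.5, acer = 0.375
```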

\ The FaceForensics Benchmark65 is an ongoing automated benchmark for the detection of face manipulation. The organisers of the benchmark made the FaceForensics++ dataset available for training. Manipulated videos (4,000 in total) were created using four techniques, i.e., two computer graphics-based approaches (Face2Face and FaceSwap) and two learning-based approaches (DeepFakes and NeuralTextures). The DeepFakes videos were generated using a slightly modified version of FaceSwap66, and the NeuralTextures videos were created using the approach proposed by Thies et al. [61]. The benchmark test set consists of 1,000 images randomly selected from videos produced by the manipulation methods or from the original videos [56]. Participants have to submit results to the benchmark, rather than code as in other competitions; this is illustrated in Figure 4a. The outcome of a submission is illustrated in Figure 4b, where the scores are a measure of accuracy (Eq. (8)).

\ The Open Media Forensics Challenge (OpenMFC, formerly DARPA MFC)67 is an annual image and video forensics evaluation aiming to facilitate the development of multimedia manipulation detection systems. It has been organised annually68 since 2017 under the name of DARPA MFC. In 2020, the National Institute of Standards and Technology (NIST) launched OpenMFC as a new evaluation platform, based on its previous experience with the DARPA MFC series, to make participation more convenient for all researchers. In OpenMFC 2020, two deepfake-related tasks were included for the first time: Image GAN Manipulation Detection (IGMD) and Video GAN Manipulation Detection (VGMD). The organisers provided an image evaluation dataset for the IGMD task, containing 1,000 images from over 200 image journals69, and a video evaluation dataset for the VGMD task, including over 100 test videos. Furthermore, they provided the datasets70 used in the previous MFC challenges as development datasets. The challenge is composed of two main phases for development and evaluation, respectively, and a pre-challenge phase for quality control testing. For the evaluation of submissions, AUC-ROC is used as the primary metric. Furthermore, CDR@FAR, where CDR refers to the correct detection rate or TPR (Eq. (4)) and FAR refers to the false alarm rate or FPR (Eq. (5)), is also used as a metric [49].
\ The DeeperForensics Challenge 202071 is a deepfake face detection challenge held at the 2020 ECCV SenseHuman Workshop72. The challenge used the DeeperForensics-1.0 dataset.
The organisers provided a hidden test set to better simulate real-world scenarios. The challenge involved two phases: the “development phase” that started in August 2020, allowing 100 successful submissions, and the “final test phase” that started in October 2020, allowing 2 successful submissions until the end of the month. The submissions were evaluated using the binary cross-entropy loss (BCELoss) metric, calculated according to Eq. (18), where N is the number of videos in the hidden test set, yi is the ground truth label of video i (fake: 1, real: 0), and p(yi) is the predicted probability that video i is fake:

BCELoss = −(1/N) Σ_{i=1}^{N} [yi · log p(yi) + (1 − yi) · log(1 − p(yi))]    (18)
Results73 of the competition were discussed by Jiang et al. [26]. The top-ranked solution used three models (EfficientNet-B0, EfficientNet-B1 and EfficientNet-B2) for classification. The second-ranked solution used EfficientNet-B5 for both an image-based model and a video-based model. The third-ranked solution used a 3D convolutional neural network (3DCNN).

\ The Face Forgery Analysis Challenge 202174 is a competition hosted at the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021). It is organised by researchers from a number of organisations in China, including universities and SenseTime Research (the research arm of SenseTime75, one of the major AI “unicorns” in China). The challenge aims to advance the state-of-the-art in detection of photo-realistic manipulation of images and videos. Participants are able to train their solutions on a large annotated face dataset (i.e., the ForgeryNet dataset) that was created by applying a number of manipulation (15) and perturbation (36) techniques. The phases comprise Forgery Image Analysis, Forgery Video Analysis and Forgery Video Temporal Localization phases, plus a final phase (i.e., “private test”) in which participants’ models are tested against an unseen dataset. The following metrics are used [25]: AUC, average precision (AP) at a given “temporal Intersection over Union” threshold t (AP@tIoU) with t ∈ [0.5, 0.95], and average recall (AR) at K (AR@K), where K is the number of top-ranked labels returned by multi-class classifiers.
The 2020 CelebA-Spoof Face Anti-Spoofing Challenge76 was hosted at the 16th European Conference on Computer Vision (ECCV 2020). The challenge ran between August and October 2020, and aimed to advance the state-of-the-art in detecting “whether a presented face is live or spoof” [76]. The organisers made the CelebA-Spoof face dataset available for the competition, containing rich annotations across a range of attributes. The competition had only one phase, in which participants submitted their solutions to be evaluated against a test dataset; the spoof class was considered “positive” and the live class “negative”. The TPR@FPR metric was collected at three operating points, with the TPR at FPR = 10−4 determining the final ranking. The top three finalists (see Figure 5) used the deep learning models ResNet, EfficientNet-B7, and a novel architecture combining Central Difference Convolutional Networks (CDCN) and a Dual Attention Network (DAN). The two top-ranked solutions used different strategies to boost their models’ performance: a heuristic voting scheme for the top-ranked solution, and a weight-after-sorting strategy for the second-ranked solution.
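TPR@FPR can be computed by taking, among all thresholds that keep the false positive rate at or below the target, the one yielding the highest true positive rate; a minimal sketch (names and conventions are ours: 1 = spoof/positive, and higher score means "more likely spoof"):

```python
import numpy as np

def tpr_at_fpr(y_true, scores, target_fpr):
    """Best achievable TPR subject to FPR <= target_fpr, where a
    sample is called positive (spoof) when its score >= threshold."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    best_tpr = 0.0
    for t in np.unique(scores):              # candidate thresholds
        if np.mean(neg >= t) <= target_fpr:  # FPR constraint met
            best_tpr = max(best_tpr, float(np.mean(pos >= t)))
    return best_tpr
```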
The 2021 CSIG Challenge is the second edition of a challenge organised by the China Society of Image and Graphics (CSIG). The 2021 challenge has the Fake Media Forensic Challenge as its 6th track, co-organised by CSIG’s Digital Media Forensics and Security Technical Committee and the Institute of Information Engineering, Chinese Academy of Sciences. This track has two tasks: deepfake video detection and deepfake audio/speech detection. For the deepfake video detection task, the dataset contains a public training set with 10,000 sound-free face videos (including 4,000 fake videos), a public test set with 20,000 face videos (the percentage of deepfake videos is unknown to participants), and a private test set determined and used at the final session to select the winners. All videos contain faces of East Asian people, and cover a wide range of parameters such as multiple resolutions and encoding quality factors, the use of blurring or sharpening filters, and added noise. Deepfake videos were created using public tools including DeepFaceLab [53], Faceswap, Faceswap-GAN, Recycle-GAN [6] and ALAE (Adversarial Latent Autoencoders) [54]. For the deepfake audio/speech detection task, the dataset contains a public training set with 10,000 speech samples (including 6,000 fake ones), a public test set with 20,000 speech samples (the percentage of fake samples is unknown to participants), and a private test set for the final session (as for the deepfake video detection task). The fake speech samples were generated with TTS (text-to-speech) voice synthesis tools and VC (voice conversion) tools. The main TTS tools used include open-source tools such as DeepVoice, TensorFlowTTS and GAN-TTS [8], and commercial software tools such as those from iFlytek and IBM. The main VC tools used include Adaptive-VC and CycleGAN-VC [29]. For both deepfake detection tasks, the performance metric used is log loss.
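Log loss, the metric for both tasks above, is the average binary cross-entropy between the predicted fake-probability and the true label. A minimal sketch (illustrative, not the official scorer; predictions are clipped to avoid log(0)):

```python
import math

def log_loss(y_true, p_pred, eps=1e-15):
    """Average binary cross-entropy; y_true in {0, 1}, p_pred = P(fake)."""
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1 - eps)  # clip to keep log() finite
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)
```

A random guesser that always predicts 0.5 scores log(2) ≈ 0.693; confident correct predictions drive the loss toward 0, while confident wrong ones are penalised heavily.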
The 2020 China Artificial Intelligence Competition was the second edition of a Chinese AI competition open to the general public, organised by the municipal government of the City of Xiamen in China. In 2020, it had two sub-competitions: the Multimedia Information Recognition Technology Competition and the Language and Knowledge Technology Competition. The Multimedia Information Recognition Technology Competition included two tasks on deepfakes: one on deepfake video detection and one on deepfake audio/speech detection. The deepfake video detection task used 3,000 videos, with log loss as the sole performance metric. The deepfake audio/speech detection task used 20,000 audio samples (mostly in Chinese, the remainder in English), with EER as the sole performance metric. For both tasks, the ratio between real and deepfake samples was 1:1. We did not find where to download the datasets used for the tasks, nor a more detailed technical description of them. For the deepfake video detection task, the top two winning teams (with an A prize) were from Netease (Hangzhou) Network Co., Ltd. and Beijing RealAI Technology Co., Ltd., followed by three other teams winning a B prize: Xiamen Fuyun Information Technology Co., Ltd.; the Institute of Computing Technology, Chinese Academy of Sciences; and Wuhan Daqian Information Technology Co., Ltd. For the deepfake audio/speech task, no team won an A prize, but one team won a B prize: SpeakIn Technologies Co., Ltd. The final results of some teams were published, but some teams were allowed to hide their results. We did not find a detailed technical report summarising the results and explaining the work of the winning teams.
One of the prize-winning teams, Beijing RealAI Technology Co., Ltd., is a Chinese company active in deepfake-related R&D.
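EER, used as the sole metric for the deepfake audio/speech detection task above, is the error rate at the operating point where false acceptances equal false rejections. A sketch as a threshold sweep (an illustrative implementation; the function name and score convention are our assumptions):

```python
import numpy as np

def equal_error_rate(scores, labels):
    """EER: error rate where the false acceptance rate (fakes scored as real)
    equals the false rejection rate (real scored as fake).
    scores: higher = more likely real; labels: 1 = real, 0 = fake."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    best_gap, best_eer = np.inf, 1.0
    for thr in np.unique(scores):
        far = float(np.mean(scores[labels == 0] >= thr))  # fakes accepted
        frr = float(np.mean(scores[labels == 1] < thr))   # real rejected
        if abs(far - frr) < best_gap:                     # closest FAR/FRR crossing
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer
```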
\
The Voice Conversion Challenge is a biennial competition that has been running since 2016. The challenge and its corresponding workshop, hosted at the INTERSPEECH conference, are supported by SynSIG (the Speech Synthesis Special Interest Group) of the International Speech Communication Association (ISCA). Its aim is to promote progress in voice conversion (VC) technology, which can be applied to a number of positive and negative use cases, such as spoofing voice biometric systems. The 2020 challenge focused on speaker conversion, a sub-problem of VC, and included two tasks. For the first task, “intra-lingual semi-parallel voice conversion”, participants had to develop 16 VC systems (speaker-pair combinations) covering male and female speakers uttering English sentences, using the provided Voice Conversion Challenge 2020 database v1.0 for training (refer to Section 4). For the second task, “cross-lingual voice conversion”, participants had to develop 24 VC systems, also covering male and female speakers, but uttering sentences in three languages (Finnish, German and Mandarin), based on the provided training dataset. Figure 6 illustrates the process of training and generating VC systems.
Submissions were evaluated for “perceived naturalness and similarity through listening tests”. As such, the organisers used subjective evaluation [70] and recruited both native and non-native English speakers (i.e., Japanese native speakers) via crowd-sourcing for the listening tests. Naturalness (answering the question “How natural does the converted voice sound?”) was measured using the metric MOS (covered in Section 4.6), and similarity (answering the question “how similar the converted voice sound comparing source and target speakers?”) was measured in terms of speaker recognition as “same” or “different”, as elaborated by Wester et al. [68]. Tests also focused on the effects of language differences on the performance of the VC systems submitted to the competition. The most popular CNN/RNN/GAN-based VC systems submitted used WaveNet, WaveRNN, and Parallel WaveGAN. Results indicated that, in terms of similarity, the best-performing VC systems were as good as natural speech, but none reached human-level naturalness for task 1; scores were lower for task 2, which was more complex [70]. The organisers of the 2020 competition also used objective evaluation [12]. The metrics used for evaluating speaker similarity were: equal error rate (EER), false acceptance rate of the target (P_fa^tar), miss rate of the source (P_miss^src), and cosine similarity of speaker embedding vectors (cos-sim) according to Eq. (19), where A is the speaker embedding vector for the converted audio and B is the speaker embedding vector for the original audio. The performance of the VC systems as a spoofing countermeasure was also evaluated using EER, while the quality of the subjective MOS obtained via listening tests was assessed using a DL-based model for predicting MOS, called MOSNet [43]. Lastly, to evaluate the intelligibility of the converted transcribed speech, in comparison with the original transcribed speech, the word error rate (WER) [4] was used. WER is calculated according to Eq. (20), i.e., WER = (I + D + S)/N, where I refers to insertions, D refers to deletions, S refers to substitutions, and N refers to the total number of words in the original transcript.
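The two formula-based metrics, cos-sim (Eq. (19)) and WER (Eq. (20)), can be sketched as follows (illustrative implementations; the function names are ours, not from the challenge's evaluation code):

```python
import math

def cos_sim(a, b):
    """Eq. (19)-style cosine similarity of speaker embedding vectors A and B."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def word_error_rate(reference, hypothesis):
    """Eq. (20): WER = (I + D + S) / N, via Levenshtein alignment of the word
    sequences, with N the number of words in the reference transcript."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits turning ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            dp[i][j] = min(dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),  # substitution/match
                           dp[i - 1][j] + 1,   # deletion
                           dp[i][j - 1] + 1)   # insertion
    return dp[-1][-1] / len(ref)
```

For instance, `word_error_rate("the capital of france", "the capital of spain")` gives 0.25: one substitution among four reference words.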
![Figure 6: Illustration of tasks for the Voice Conversion Challenge 2020, extracted from [70].](https://cdn.hackernoon.com/images/InxBRjRIs6M1kdhuWcyNHiiUrxm1-qsx3e8s.jpeg)
\

The Deepfake Africa Challenge (2021) is a new initiative of the AI Africa Expo, in partnership with a film and media production company (Wesgro) and the African data science competition platform Zindi. Its aim is “to create convincing deepfakes to highlight the power of this synthetic media, illustrating its creative potential for exploitation for both positive and negative outcomes and focusing debate about its ethical use / misuse in an African context”. Eligible participants were required to be citizens and residents of the African continent. Submissions, accepted up to the end of July 2021, can be either video or audio. Submissions are evaluated in terms of artistic creativity, relevance to the challenge topic, and innovation in the generation process, as long as participants use publicly available tools and packages. The top three finalists will receive a prize, present their work at the Expo, and will have to grant copyrights to Zindi. Unlike the other competitions reviewed in this section, which focused on advancing the state of the art in detecting synthetic or manipulated media, this competition focused on the generation of deepfakes, which seems more humanities-centred. This is a trend observed in arts [31] and culture [57].

5.3 Generation and Detection of Manipulated Media
The DeepFake Game Competition (DFGC) is in its first edition, hosted at the 2021 International Joint Conference on Biometrics (IJCB 2021). Its organisers are mainly from the Institute of Automation, Chinese Academy of Sciences (CASIA). The idea of the competition is to promote an adversarial game between agents, pushing for advances in both deepfake creation and detection. To achieve this, a 6-stage protocol was designed, interleaving three creation phases (C-phases) and three detection phases (D-phases), typically one week apart; submissions closed in April 2021. Both C-phases and D-phases were bound to the Celeb-DF (v2) dataset [40], containing 6,229 videos (590 real/original videos and 5,639 fake/manipulated videos), for training purposes. Submissions to a C-phase consisted of datasets derived from Celeb-DF (v2) using novel face-swap approaches; submissions to a D-phase consisted of detection models/code. The models submitted in a D-phase were evaluated against the datasets submitted in the previous C-phase [52]. The metrics used for evaluation were: a detection score, used for evaluating a D-phase, and a creation score, used for evaluating a C-phase. The top three finalists in the detection phases employed the CNN-based classifiers EfficientNet-B3, EfficientNet-B0 and EfficientNetV2.
The Detection Score (DS) metric captures a model’s ability to correctly classify the fake images submitted in the previous C-phase against a set of real images in the Celeb-DF test dataset. It is calculated using Eq. (21), where NC is the number of valid submissions of created synthesis test sets in the last C-phase.

The Creation Score (CS) metric used to evaluate creation submissions is calculated by Eq. (22), where ND is the number of valid submissions of detection methods in the last D-phase and the noise score (S_noise) penalises noisy images; the other three parts of the equation relate to the following: “ID level similarity to the donor ID, image level similarity to the target frame, and the deception ability against detection models. ID level similarity is scored by a face recognition model using dot product of two ID features (fake face ID and donor ID). The image level similarity is scored by SSIM [Structural Similarity Index] to make sure the face-swapped image is similar to the corresponding target image in content and quality”.

Peng et al. [52] observed a commonality between the three winning teams for the creation task: the use of the FaceShifter [37] framework for face swapping. They highlighted two overall reflections about the competition: (1) the limited diversity of the deepfake datasets submitted and the use of repetitive methods to generate them, and (2) the limited size of the Celeb-DF (v2) dataset itself, flagging the need for a larger dataset for next year’s competition. The organisers also applied the top two detection models to unseen datasets (DFDC and FaceForensics++) and observed that they did not generalise well.
\
This section presents a meta-review of 12 selected deepfake-related survey papers, including eight published in English [16, 45, 46, 64–66, 71, 73] and four published in Chinese [7, 38, 41, 59]. It covers the following aspects in a systematic manner: definitions and scope, performance metrics, datasets, challenges/competitions/benchmarks, performance comparison, key challenges and recommendations.
The meta-review aims at drawing some high-level insights for monitoring future development of deepfake-related technologies and their applications.
\
As we discussed in Section 1.1, there is no universally accepted definition of “deepfake” as a term among researchers, practitioners and lawmakers. This is also reflected in how the authors of the 12 survey papers treated this aspect. Most authors discussed the history of deepfakes and pointed out that the term combines “deep learning” and “fake”, but some used a broader definition, e.g., Lyu [45] defined deepfakes as “high quality fake videos and audios generated by AI algorithms”. Some authors also referred to deepfake-related legislation, but none pointed out that the definitions in some such legislation are completely different from the more technical definitions involving the use of deep learning. No authors discussed the blurred boundary between deepfakes and non-deepfakes, although some surveys actually cover both, e.g., Tao et al. [59] focused on speech forgery and did not explicitly highlight “deepfake”.
In terms of scope, while some authors (correctly) considered all types of media that can be produced by deepfake-related techniques [38, 41, 45, 65], others considered only a narrow scope, e.g., the authors of [7, 64, 71, 73] considered only videos, and only the authors of [16, 66] considered both images and videos. Another phenomenon we observed is that many authors focused mainly on face images and videos, and the authors of three surveys [16, 64, 71] even limited the definition of “deepfake” to this narrow scope.
Such unnecessarily narrow definitions and scopes can lead to confusion and do not help exchanges between researchers and practitioners working on different types of deepfakes.
We call on more researchers to accept a broader definition of “deepfake”, so that highly realistic/natural media of any kind generated by a sophisticated automated method (often AI-based) is considered a deepfake. Here, we provide two examples covered by such a broader definition: the image2image (or pixel2pixel) technique [80], which allows the production of deepfake images and videos of any objects (e.g., the “horse2zebra” deepfake image shown in Figure 7), and the so-called “deepfake geography” [77], where AI-based techniques are used to generate realistic-looking satellite images.
![Figure 7: An image of a horse (left) and a deepfake image generated using the image2image technique proposed in [78] (right).](https://cdn.hackernoon.com/images/InxBRjRIs6M1kdhuWcyNHiiUrxm1-7y113eyg.jpeg)
\ Another important fact missed or not sufficiently discussed by the authors of all 12 surveys is that deepfake techniques can be used for positive applications, e.g., creative arts, entertainment and protecting online users’ privacy. We call for more researchers and practitioners to follow the proposal in the 2020 Tencent AI White Paper [60] to start using the more neutral-sounding term “deep synthesis”. Accordingly, we can use different words for different types of data generated using “deep synthesis” techniques, e.g., “deep art”, “deep animation”, “deep music”, and “deepfake”. While the authors of the 12 survey papers did not recognise the positive applications of “deepfake” technologies, some other researchers did, e.g., the organisers of the Voice Conversion Challenge 2020, who said the VC technology (for speech deepfakes) “is useful in many applications, such as customizing audio book and avatar voices, dubbing, movie industry, teleconferencing, singing voice modification, voice restoration after surgery, and cloning of voices of historical persons”.
\
Surprisingly, none of the 12 surveys covered performance metrics explicitly. Some directly used performance metrics to explain and compare the performance of covered deepfake generation and detection methods. The most used performance metrics include accuracy, EER, and AUC. This may be explained by the page constraints of such survey papers, which did not allow the authors to cover performance metrics systematically. The subjective quality of deepfakes is the area least covered by the surveys, which seems related to an unbalanced coverage of deepfake generation versus deepfake detection in terms of performance evaluation and comparison (the former much less than the latter).
\
Many of the 12 survey papers list a number of deepfake-related datasets, but none of them have coverage as complete as ours in Section 4. For instance, none of the surveys covered the Voice Conversion Challenge 2016/2018/2020 datasets, and the ASVspoof 2019/2021 datasets are covered only briefly in two surveys [38, 59]. In addition, more recent deepfake datasets, especially those released in 2021, are not covered by any of the surveys. We believe that our Section 4 is the most comprehensive review of deepfake-related datasets so far.
Some survey papers include datasets that are arguably not deepfake datasets, e.g., Verdoliva [66] covered many general fake image datasets in which the manipulated images were not generated by deep learning or even AI-based methods, and some surveys (e.g., [38]) mentioned the ASVspoof 2015 datasets, even though we did not see deep learning being used to generate the data in those datasets.
\
Many surveys cover deepfake-related challenges, competitions and benchmarks. The coverage is, however, mostly limited, and some challenges (e.g., the Voice Conversion Challenge 2016/2018/2020 and the two Chinese challenges we covered in Section 5) are not covered by any of the surveys. The level of detail of challenges, competitions and benchmarks is also normally limited, compared with what we chose to include in Section 5. Similar to the datasets we covered in Section 4, we believe that our coverage of deepfake-related challenges, competitions and benchmarks in Section 5 is also the most comprehensive so far.
\
Most surveys cover methods for deepfake generation and detection well, but only some explicitly compared the performance of different methods [38, 46, 64].
Among all the survey papers, Li et al. [38] conducted the most comprehensive study on performance of different deepfake detection methods. In addition to showing the performance metrics of a number of deepfake detection methods in Table 3 of [38], they also looked at general characteristics and issues of different types of deepfake detection methods, as shown in Table 4. Furthermore, they also looked at research on robustness of deepfake detection methods against adversarial samples, referring to some work that showed a lack of such robustness.
Due to quality issues of many deepfake-related datasets (discussed in Section 4.6), we need to treat any performance metrics and comparison of different detection methods with caution. Without testing all methods on a sufficiently large, diverse and high-quality deepfake dataset, the performance comparison results can be misleading. This highlights the importance of having more challenges, competitions and benchmarks to encourage performance comparison on standard datasets and using consistent performance metrics.
![Table 4: Comparison of different deepfake detection methods as shown in Table 4 of [38].](https://cdn.hackernoon.com/images/InxBRjRIs6M1kdhuWcyNHiiUrxm1-ud123ebf.jpeg)
\
The authors of some surveys identified some key challenges and future research directions for the deepfake community.
Not surprisingly, how to develop more robust, scalable, generalisable and explainable deepfake detection methods is one of the most discussed key challenges and also a major future research direction [7, 16, 38, 41, 45, 59, 65, 66, 71]. Considering the arms race between deepfake generation and detection, this research direction will likely remain the hottest topic in deepfake research.
A couple of surveys [38, 66] mentioned fusion as a key future research direction, where “fusion” refers to combining different methods (e.g., combining multiple detectors of different types) and data sources (e.g., jointly considering audio-visual analysis) to achieve better performance for deepfake detection. Lyu [45] suggested that, for detection of deepfake videos, we need to consider video-level detection more, which can be considered fusion of detection results of all video frames.
The authors of three surveys, Lyu [45], Deshmukh and Wankhade [16] and Younus and Hasan [71], argued that better (higher-quality, more up-to-date, and more standard) deepfake datasets are needed to develop more effective deepfake detection methods. Lyu [45] also suggested that we need to consider social media laundering effects in training data and improve the evaluation of datasets. We agree with them on these points.
Tao et al. [59] suggested that low-cost deepfake generation/detection should be considered as a future research direction. This is a valid recommendation since lightweight methods will allow less powerful computing devices (e.g., IoT devices) to benefit from such technologies.
Two Chinese surveys [38, 41] also mentioned the need to have new deepfake-related legislations combating malicious use of deepfakes and the need to train end users such as journalists. This is likely an area where interdisciplinary research can grow.
There are also other ad-hoc recommendations given by the authors of some surveys. For example, Lyu [45] argued that deepfake detection should be considered a (more complicated) multi-class, multi-label and local detection problem. Tolosana et al. [64] discussed specific research directions for different deepfake generation methods (face synthesis, identity swap, attribute manipulation, and expression swap). Liang et al. [41] and Li et al. [38] recommended more active defence mechanisms, such as using digital watermarking and blockchain technologies to build trustworthy media frameworks against deepfakes.
\
The rapid growth in the capability to manipulate media or create synthetic media that look realistic and natural has paved the way for deepfakes. This paper first took a critical look at the different definitions of the term “deepfake”. In that regard, we pointed out the contradictory definitions in use and call on the wider community to consider how to define a new term with a more consistent scope and meaning. For instance, replacing “deepfake” with “deep synthesis” can be more inclusive, embracing positive applications of deepfake techniques, e.g., in entertainment and for simulation purposes.
This paper provided a comprehensive overview of multiple aspects of the deepfake ecosystem, drawing from the research literature and other online sources published in two languages: English and Chinese. It covered commonly used performance metrics and standards, related datasets, and challenges, competitions and benchmarks. It also presented a meta-review of 12 selected deepfake-related survey papers published in 2020 and 2021, covering not only the above-mentioned aspects but also highlighting key challenges and recommendations.
\
[1] Darius Afchar, Vincent Nozick, Junichi Yamagishi, and Isao Echizen. 2018. MesoNet: A Compact Facial Video Forgery Detection Network. In Proceedings of the 2018 IEEE International Workshop on Information Forensics and Security. IEEE, 1–7. https://doi.org/10.1109/WIFS.2018.8630761
[2] Henry Ajder, Giorgio Patrini, Francesco Cavalli, and Laurence Cullen. 2019. The State of Deepfakes: Landscape, Threats, and Impact. Deeptrace, 27 pages. https://sensity.ai/reports/
[3] Zahid Akhtar and Tiago H. Falk. 2017. Audio-Visual Multimedia Quality Assessment: A Comprehensive Survey. IEEE Access 5 (2017), 21090–21117. https://doi.org/10.1109/ACCESS.2017.2750918
[4] Ahmed Ali and Steve Renals. 2018. Word Error Rate Estimation for Speech Recognition: e-WER. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, 20–24. https://doi.org/10.18653/v1/P18-2004
[5] Sercan Ö. Arık, Jitong Chen, Kainan Peng, Wei Ping, and Yanqi Zhou. 2018. Neural Voice Cloning with a Few Samples. In Proceedings of the 32nd International Conference on Neural Information Processing Systems. Curran Associates Inc., 10040–10050. https://papers.nips.cc/paper/2018/hash/4559912e7a94a9c32b09d894f2bc3c82-Abstract.html
[6] Aayush Bansal, Shugao Ma, Deva Ramanan, and Yaser Sheikh. 2018. Recycle-GAN: Unsupervised Video Retargeting. In Proceedings of the 2018 European Conference on Computer Vision. Springer, 17 pages. https://doi.org/10.1007/978-3-030-01228-1_8
[7] Yu-xuan Bao, Tian-liang Lu, and Yan-hui Du. 2020. Overview of Deepfake Video Detection Technology. Computer Science 47, 9 (2020), 283–292. https://doi.org/10.11896/jsjkx.200400130
[8] Mikołaj Bińkowski, Jeff Donahue, Sander Dieleman, Aidan Clark, Erich Elsen, Norman Casagrande, Luis C. Cobo, and Karen Simonyan. 2019. High Fidelity Speech Synthesis with Adversarial Networks. https://doi.org/10.48550/ARXIV.1909.11646
[9] Madeline Brady. 2020. Deepfakes: A New Disinformation Threat? Report by Democracy Reporting International, 9 pages. https://democracy-reporting.org/dri_publications/deepfakes-a-new-disinformation-threat/
[10] Umur Aybars Ciftci, Ilke Demir, and Lijun Yin. 2020. FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals. IEEE Transactions on Pattern Analysis and Machine Intelligence (2020), 17 pages. https://doi.org/10.1109/TPAMI.2020.3009287
[11] Hao Dang, Feng Liu, Joel Stehouwer, Xiaoming Liu, and Anil K. Jain. 2020. On the Detection of Digital Face Manipulation. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 10 pages. https://doi.org/10.1109/CVPR42600.2020.00582
[12] Rohan Kumar Das, Tomi Kinnunen, Wen-Chin Huang, Zhen-Hua Ling, Junichi Yamagishi, Zhao Yi, Xiaohai Tian, and Tomoki Toda. 2020. Predictions of Subjective Ratings and Spoofing Assessments of Voice Conversion Challenge 2020 Submissions. In Proceedings of the Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020. International Speech Communication Association, 99–120. https://doi.org/10.21437/VCC_BC.2020-15
[13] Héctor Delgado, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Xuechen Liu, Andreas Nautsch, Jose Patino, Md Sahidullah, Massimiliano Todisco, Xin Wang, and Junichi Yamagishi. 2021. ASVspoof 2021: Automatic Speaker Verification Spoofing and Countermeasures Challenge Evaluation Plan. https://www.asvspoof.org/asvspoof2021/asvspoof2021_evaluation_plan.pdf
[14] Héctor Delgado, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Xuechen Liu, Andreas Nautsch, Jose Patino, Md Sahidullah, Massimiliano Todisco, Xin Wang, and Junichi Yamagishi. 2021. ASVspoof 2021 Challenge - Logical Access Database. https://doi.org/10.5281/zenodo.4837263
[15] Héctor Delgado, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Xuechen Liu, Andreas Nautsch, Jose Patino, Md Sahidullah, Massimiliano Todisco, Xin Wang, and Junichi Yamagishi. 2021. ASVspoof 2021 Challenge - Speech Deepfake Database. https://doi.org/10.5281/zenodo.4835108
[16] Anushree Deshmukh and Sunil B. Wankhade. 2021. Deepfake Detection Approaches Using Deep Learning: A Systematic Review. In Intelligent Computing and Networking: Proceedings of IC-ICN 2020 (Lecture Notes in Networks and Systems, Vol. 146). Springer, 293–302. https://doi.org/10.1007/978-981-15-7421-4_27
[17] Xinyi Ding, Zohreh Raziei, Eric C. Larson, Eli V. Olinick, Paul Krueger, and Michael Hahsler. 2020. Swapped Face Detection using Deep Learning and Subjective Assessment. EURASIP Journal on Information Security 2020, 1 (2020), 1–12. https://doi.org/10.1186/s13635-020-00109-8
[18] Brian Dolhansky, Joanna Bitton, Ben Pflaum, Jikuo Lu, Russ Howes, Menglin Wang, and Cristian Canton Ferrer. 2020. The DeepFake Detection Challenge (DFDC) Dataset. https://doi.org/10.48550/ARXIV.2006.07397
[19] Brian Dolhansky, Joanna Bitton, Ben Pflaum, Jikuo Lu, Russ Howes, Menglin Wang, and Cristian Canton Ferrer. 2020. The DeepFake Detection Challenge (DFDC) Dataset. arXiv:2006.07397. https://arxiv.org/abs/2006.07397
[20] Nick Dufour and Andrew Gully. 2019. Contributing Data to Deepfake Detection Research. Google AI Blog. https://ai.googleblog.com/2019/09/contributing-data-to-deepfake-detection.html
[21] Ricard Durall, Margret Keuper, Franz-Josef Pfreundt, and Janis Keuper. 2019. Unmasking Deep-Fakes with Simple Features. https://doi.org/10.48550/ARXIV.1911.00686
[22] Cristian Canton Ferrer, Brian Dolhansky, Ben Pflaum, Joanna Bitton, Jacqueline Pan, and Jikuo Lu. 2020. Deepfake Detection Challenge Results: An Open Initiative to Advance AI. Meta AI Blog. https://ai.facebook.com/blog/deepfake-detection-challenge-results-an-open-initiative-to-advance-ai/
[23] Gereon Fox, Wentao Liu, Hyeongwoo Kim, Hans-Peter Seidel, Mohamed Elgharib, and Christian Theobalt. 2021. VideoForensicsHQ: Detecting High-Quality Manipulated Face Videos. In Proceedings of the 2021 IEEE International Conference on Multimedia and Expo. IEEE, 1–6. https://doi.org/10.1109/ICME51207.2021.9428101
[24] Haiying Guan, Andrew Delgado, Yooyoung Lee, Amy N. Yates, Daniel Zhou, Timothee Kheyrkhah, and Jon Fiscus. 2021. User Guide for NIST Media Forensic Challenge (MFC) Datasets. https://doi.org/10.6028/NIST.IR.8377
[25] Yinan He, Bei Gan, Siyu Chen, Yichun Zhou, Guojun Yin, Luchuan Song, Lu Sheng, Jing Shao, and Ziwei Liu. 2021. ForgeryNet: A Versatile Benchmark for Comprehensive Forgery Analysis. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 4360–4369. https://doi.org/10.1109/CVPR46437.2021.00434
[26] Liming Jiang, Zhengkui Guo, Wayne Wu, Zhaoyang Liu, Ziwei Liu, Chen Change Loy, Shuo Yang, Yuanjun Xiong, Wei Xia, Baoying Chen, Peiyu Zhuang, Sili Li, Shen Chen, Taiping Yao, Shouhong Ding, Jilin Li, Feiyue Huang, Liujuan Cao, Rongrong Ji, Changlei Lu, and Ganchao Tan. 2021. DeeperForensics Challenge 2020 on Real-World Face Forgery Detection: Methods and Results. arXiv:2102.09471. https://arxiv.org/pdf/2102.09471.pdf
[27] Liming Jiang, Ren Li, Wayne Wu, Chen Qian, and Chen Change Loy. 2020. DeeperForensics-1.0: A Large-Scale Dataset for Real-World Face Forgery Detection. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 2886–2895. https://doi.org/10.1109/CVPR42600.2020.00296
[28] Nal Kalchbrenner, Erich Elsen, Karen Simonyan, Seb Noury, Norman Casagrande, Edward Lockhart, Florian Stimberg, Aaron van den Oord, Sander Dieleman, and Koray Kavukcuoglu. 2018. Efficient Neural Audio Synthesis. https://doi.org/10.48550/ARXIV.1802.08435
[29] Takuhiro Kaneko and Hirokazu Kameoka. 2017. Parallel-Data-Free Voice Conversion Using Cycle-Consistent Adversarial Networks. https://doi.org/10.48550/ARXIV.1711.11293
[30] Tero Karras, Samuli Laine, and Timo Aila. 2019. A Style-based Generator Architecture for Generative Adversarial Networks. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 4401–4410. https://doi.org/10.1109/CVPR.2019.00453
[31] Ross Kelly. 2021. Digital Art Exhibition to Showcase Creative Potential of AI. DIGIT News. https://www.digit.fyi/digital-art-exhibition-to-showcase-creative-potential-of-ai/
[32] Ali Khodabakhsh, Raghavendra Ramachandra, Kiran Raja, Pankaj Wasnik, and Christoph Busch. 2018. Fake Face Detection Methods: Can They Be Generalized?. In Proceedings of the 2018 International Conference of the Biometrics Special Interest Group. IEEE, 1–6. https://doi.org/10.23919/BIOSIG.2018.8553251
[33] Hyeongwoo Kim, Mohamed Elgharib, Michael Zollhöfer, Hans-Peter Seidel, Thabo Beeler, Christian Richardt, and Christian Theobalt. 2019. Neural Style-Preserving Visual Dubbing. ACM Transactions on Graphics 38, 6, Article 178 (2019), 13 pages. https://doi.org/10.1145/3355089.3356500
[34] Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Niessner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, and Christian Theobalt. 2018. Deep Video Portraits. ACM Transactions on Graphics 37, 4, Article 163 (2018), 14 pages. https://doi.org/10.1145/3197517.3201283
[35] Pavel Korshunov and Sébastien Marcel. 2019. Vulnerability Assessment and Detection of Deepfake Videos. In Proceedings of the 2019 International Conference on Biometrics. IEEE, 1–6. https://doi.org/10.1109/ICB45273.2019.8987375
[36] Patrick Kwon, Jaeseong You, Gyuhyeon Nam, Sungwoo Park, and Gyeongsu Chae. 2021. KoDF: A Large-scale Korean DeepFake Detection Dataset. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision. IEEE, 10724–10733. https://doi.org/10.1109/ICCV48922.2021.01057
[37] Lingzhi Li, Jianmin Bao, Hao Yang, Dong Chen, and Fang Wen. 2020. FaceShifter: Towards High Fidelity And Occlusion Aware Face Swapping. arXiv:1912.13457. https://arxiv.org/abs/1912.13457
[38] Xurong Li, Shouling Ji, Chunming Wu, Zhenguang Liu, Shuiguang Deng, Peng Cheng, Min Yang, and Xiangwei Kong. 2021. Survey on Deepfakes and Detection Techniques. Journal of Software 32, 2 (2021), 496–518. http://www.jos.org.cn/1000-9825/6140.htm
[39] Yuezun Li, Ming-Ching Chang, and Siwei Lyu. 2018. In Ictu Oculi: Exposing AI Created Fake Videos by Detecting Eye Blinking. In Proceedings of the 2018 IEEE International Workshop on Information Forensics and Security. IEEE, 1–7. https://doi.org/10.1109/WIFS.2018.8630787
[40] Yuezun Li, Xin Yang, Pu Sun, Honggang Qi, and Siwei Lyu. 2020. Celeb-DF: A Large-Scale Challenging Dataset for DeepFake Forensics. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 3204–3213. https://doi.org/10.1109/CVPR42600. 2020.00327
[41] Ruigang Liang, Peizhuo Lv, Yue Zhao, Peng Chen, Hao Xing, Yingjun Zhang, Jizhong Han, Ran He, Xianfeng Zhao, Ming Li, and Kai Chen. 2020. A Survey of Audiovisual Deepfake Detection Techniques. Journal of Cyber Security 5, 2 (2020), 1–17. http://jcs.iie.ac.cn/xxaqxb/ch/re ader/view abstract.aspx?file no=20200202&flag=1
[42] Steven R. Livingstone and Frank A. Russo. 2018. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A Dynamic, Multimodal Set of Facial and Vocal Expressions in North American English. PloS one 13, 5 (2018), 35 pages.
[43] Chen-Chou Lo, Szu-Wei Fu, Wen-Chin Huang, Xin Wang, Junichi Yamagishi, Yu Tsao, and Hsin-Min Wang. 2021. MOSNet: Deep Learning based Objective Assessment for Voice Conversion. arXiv:1904.08352. https://arxiv.org/pdf/1904.08352.pdf
[44] Jaime Lorenzo-Trueba, Junichi Yamagishi, Tomoki Toda, Daisuke Saito, Fernando Villavicencio, Tomi Kinnunen, and Zhenhua Ling. 2018. The Voice Conversion Challenge 2018: Promoting Development of Parallel and Nonparallel Methods. In Proceedings of the Odyssey 2018 The Speaker and Language Recognition Workshop. International Speech Communication Association, 195–202. https://doi.org/10.21437/Odyssey.2018-28
[45] Siwei Lyu. 2020. Deepfake Detection: Current Challenges and Next Steps. In Proceedings of the 2020 IEEE International Conference on Multimedia Expo Workshops. IEEE, 6 pages. https://doi.org/10.1109/ICMEW46912.2020.9105991
[46] Yisroel Mirsky and Wenke Lee. 2021. The Creation and Detection of Deepfakes: A Survey. ACM Computing Survey 54, 1, Article 7 (2021), 41 pages. https://doi.org/10.1145/3425780
[47] Gautham J. Mysore. 2015. Can we Automatically Transform Speech Recorded on Common Consumer Devices in Real-World Environments into Professional Production Quality Speech?—A Dataset, Insights, and Challenges. IEEE Signal Processing Letters 22, 8 (2015), 1006–1010. https://doi.org/10.1109/LSP.2014.2379648
[48] Jo˜ao C. Neves, Ruben Tolosana, Ruben Vera-Rodriguez, Vasco Lopes, Hugo Proen¸ca, and Julian Fierrez. 2020. GANprintR: Improved Fakes and Evaluation of the State of the Art in Face Manipulation Detection. IEEE Journal of Selected Topics in Signal Processing 14, 5 (2020), 1038–1048. https://doi.org/10.1109/JSTSP.2020.3007250
[49] NIST Media Forensics Challenge Team. 2021. Open Media Forensics Challenge 2020-2021 Evaluation Plan. https://mig.nist.gov/MFC/Web/EvalPlan2020/OpenMFC2020EvaluationPlan.pdf
[50] Aaron van den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. 2016. WaveNet: A Generative Model for Raw Audio. https://doi.org/10.48550/ARXIV.1609.03499
[51] Debajyoti Pal and Tuul Triyason. 2018. A Survey of Standardized Approaches towards the Quality of Experience Evaluation for Video Services: An ITU Perspective. International Journal of Digital Multimedia Broadcasting 2018, Article 1391724 (2018), 25 pages. https://doi.org/10.1155/20 18/1391724
[52] Bo Peng, Hongxing Fan, Wei Wang, Jing Dong, Yuezun Li, Siwei Lyu, Qi Li, Zhenan Sun, Han Chen, Baoying Chen, Yanjie Hu, Shenghai Luo, Junrui Huang, Yutong Yao, Boyuan Liu, Hefei Ling, Guosheng Zhang, Zhiliang Xu, Changtao Miao, Changlei Lu, Shan He, Xiaoyan Wu, and Wanyi Zhuang. 2021. DFGC 2021: A DeepFake Game Competition. arXiv:2106.01217. https:
[53] Ivan Perov, Daiheng Gao, Nikolay Chervoniy, Kunlin Liu, Sugasa Marangonda, Chris Um´e, Mr. Dpfks, Carl Shift Facenheim, Luis RP, Jian Jiang, Sheng Zhang, Pingyu Wu, Bo Zhou, and Weiming Zhang. 2020. DeepFaceLab: Integrated, Flexible and Extensible Face-swapping Framework. https://doi.org/10.48550/ARXIV.2005.05535
[54] Stanislav Pidhorskyi, Donald A. Adjeroh, and Gianfranco Doretto. 2020. Adversarial Latent Au-toencoders. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 10 pages. https://doi.org/10.1109/CVPR42600.2020.01411
[55] Andreas R¨ossler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Nießner. 2018. FaceForensics: A Large-scale Video Dataset for Forgery Detection in Human Faces. https://doi.org/10.48550/ARXIV.1803.09179
[56] Andreas R¨ossler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, and Matthias Nießner. 2019. FaceForensics++: Learning to Detect Manipulated Facial Images. In Proceedings of the 2019 International Conference on Computer Vision. IEEE, 1–11. https://doi.org/10.110 9/ICCV.2019.00009
[57] Anindya Sen. 2021. Art and Artificial Intelligence: How ‘Deepfakes’ Can Help Create Authentic Museum Experiences! Medium Blog. https://medium.com/art-world-zen/art-and-artificia l-intelligence-how-deepfakes-can-help-create-authentic-museum-experiences-65d8aa 7da29c
[58] Kou Tanaka, Hirokazu Kameoka, Takuhiro Kaneko, and Nobukatsu Hojo. 2019. WaveCycleGAN2: Time-domain Neural Post-filter for Speech Waveform Generation. https://doi.org/10.48550/A RXIV.1904.02892
[59] Jianhua Tao, Ruibo Fu, Jiangyan Yi, Chenglong Wang, and Tao Wang. 2020. Development and Challenge of Speech Forgery and Detection. Journal of Cyber Security 5, 2 (2020), 28–38. http://jcs.iie.ac.cn/xxaqxb/ch/reader/view abstract.aspx?file no=20200204&flag=1
[60] Tencent. 2020. Artificial Intelligence White Paper. https://tech.sina.com.cn/roll/2020-07-14/doc-iivhvpwx5201226.shtml
[61] Justus Thies, Michael Zollh¨ofe, and Matthias Niessner. 2019. Deferred Neural Rendering: Image Synthesis using Neural Textures. ACM Transactions on Graphics 38, Article 66 (2019), 12 pages. Issue 4. https://doi.org/10.1145/3306346.3323035
[62] Tomoki Toda, Ling-Hui Chen, Daisuke Saito, Fernando Villavicencio, Mirjam Wester, Zhizheng Wu, and Junichi Yamagishi. 2016. The Voice Conversion Challenge 2016. In Proceedings of Interspeech 2016. International Speech Communication Association, 1632–1636. https://doi.org/10.21437/Interspeech.2016-1066
[63] Massimiliano Todisco, Xin Wang, Ville Vestman, Md Sahidullah, Hector Delgado, Andreas Nautsch, Junichi Yamagishi, Nicholas Evans, Tomi Kinnunen, and Kong Aik Lee. 2019. ASVspoof 2019: Future Horizons in Spoofed and Fake Audio Detection. arXiv:1904.05441. https://arxiv.org/ pdf/1904.05441.pdf
[64] Ruben Tolosana, Ruben Vera-Rodriguez, Julian Fierrez, Aythami Morales, and Javier Ortega-Garcia. 2020. Deepfakes and beyond: A Survey of face manipulation and fake detection. Information Fusion 64 (2020), 131–148. https://doi.org/10.1016/j.inffus.2020.06.014
[65] Xin Tong, Luona Wang, Xiaoqin Pan, and Jingya Wang. 2020. An Overview of Deepfake: The Sword of Damocles in AI. In Proceedings of the 2020 International Conference on Computer Vision, Image and Deep Learning. IEEE, 265–273. https://doi.org/10.1109/CVIDL51233.2020.00-88
[66] Luisa Verdoliva. 2020. Media Forensics and DeepFakes: An Overview. IEEE Journal of Selected Topics in Signal Processing 14, 5 (2020), 910–932. https://doi.org/10.1109/JSTSP.2020.3002101
[67] Xin Wang, Junichi Yamagishi, Massimiliano Todisco, H´ector Delgado, Andreas Nautsch, Nicholas Evans, Md Sahidullah, Ville Vestman, Tomi Kinnunen, Kong Aik Lee, Lauri Juvela, Paavo Alku, Yu-Huai Peng, Hsin-Te Hwang, Yu Tsao, Hsin-Min Wang, S´ebastien Le Maguer, Markus Becker, Fergus Henderson, Rob Clark, Yu Zhang, Quan Wang, Ye Jia, Kai Onuma, Koji Mushika, Takashi Kaneda, Yuan Jiang, Li-Juan Liu, Yi-Chiao Wu, Wen-Chin Huang, Tomoki Toda, Kou Tanaka, Hirokazu Kameoka, Ingmar Steiner, Driss Matrouf, Jean-Fran¸cois Bonastre, Avashna Govender, Srikanth Ronanki, Jing-Xuan Zhang, and Zhen-Hua Ling. 2020. ASVspoof 2019: A Large-scale Public Database of Synthesized, Converted and Replayed Speech. Computer Speech & Language 64 (2020), 27 pages. https://doi.org/10.1016/j.csl.2020.101114
[68] Mirjam Wester, Zhizheng Wu, and Junichi Yamagishi. 2016. Analysis of the Voice Conversion Challenge 2016 Evaluation Results. In Proceedings of the Interspeech 2016 Conference. International Speech Communication Association, 1637–1641. https://doi.org/10.21437/Interspeech.201 6-1331
[69] Junichi Yamagishi, Massimiliano Todisco, Md Sahidullah, H´ector Delgado, Xin Wang, Nicholas Evans, Tomi Kinnunen, Kong Aik Lee, Ville Vestman, and Andreas Nautsch. 2019. ASVspoof 2019: Automatic Speaker Verification Spoofing and Countermeasures Challenge Evaluation Plan. https://www.asvspoof.org/asvspoof2019/asvspoof2019 evaluation plan.pdf
[70] Zhao Yi, Wen-Chin Huang, Xiaohai Tian, Junichi Yamagishi, Rohan Kumar Das, Tomi Kinnunen, Zhen-Hua Ling, and Tomoki Toda. 2020. Voice Conversion Challenge 2020 – Intra-lingual Semi-parallel and Cross-lingual Voice Conversion –. In Proceedings of the Joint Workshop for the Blizzard Challenge and Voice Conversion Challenge 2020. International Speech Communication Association, 80–98. https://doi.org/10.21437/VCC BC.2020-14
[71] Mohammed A. Younus and Taha M. Hasan. 2020. Abbreviated View of Deepfake Videos Detection Techniques. In Proceedings of the 2020 6th International Engineering Conference. IEEE, 115–120. https://doi.org/10.1109/IEC49899.2020.9122916
[72] Guangtao Zhai and Xiongkuo Min. 2020. Perceptual Image Quality Assessment: A Survey. Science China Information Sciences 63, Article 211301 (2020), 52 pages. https://doi.org/10.1007/s11432-019-2757-1
[73] Teng Zhang, Lirui Deng, Liang Zhang, and Xianglei Dang. 2020. Deep Learning in Face Synthesis: A Survey on Deepfakes. In Proceedings of the 2020 IEEE 3rd International Conference on Computer and Communication Engineering Technology. IEEE, 67–70. https://doi.org/10.1109/CCET5090 1.2020.9213159
[74] Yuxuan Zhang, Huan Ling, Jun Gao, Kangxue Yin, Jean-Francois Lafleche, Adela Barriuso, Antonio Torralba, and Sanja Fidler. 2021. DatasetGAN: Efficient Labeled Data Factory with Minimal Human Effort. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 10140–10150. https://doi.org/10.1109/CVPR46437.2021.01001
[75] Yuanhan Zhang, ZhenFei Yin, Yidong Li, Guojun Yin, Junjie Yan, Jing Shao, and Ziwei Liu. 2020. CelebA-Spoof: Large-Scale Face Anti-spoofing Dataset with Rich Annotations. In Proceedings of the 2020 European Conference on Computer Vision. Springer, 70–85. https://doi.org/10.1007/97 8-3-030-58610-2 5
[76] Yuanhan Zhang, Zhenfei Yin, Jing Shao, Ziwei Liu, Shuo Yang, Yuanjun Xiong, Wei Xia, Yan Xu, Man Luo, Jian Liu, Jianshu Li, Zhijun Chen, Mingyu Guo, Hui Li, Junfu Liu, Pengfei Gao, Tianqi Hong, Hao Han, Shijie Liu, Xinhua Chen, Di Qiu, Cheng Zhen, Dashuang Liang, Yufeng Jin, and Zhanlong Hao. 2021. CelebA-Spoof Challenge 2020 on Face Anti-Spoofing: Methods and Results. arXiv:2102.12642. https://arxiv.org/pdf/2102.12642.pdf
[77] Bo Zhao, Shaozeng Zhang, Chunxue Xu, Yifan Sun, and Chengbin Deng. 2021. Deep Fake Ge-ography? When Geospatial Data Encounter Artificial Intelligence. Cartography and Geographic Information Science 48, 4 (2021), 338–352. https://doi.org/10.1080/15230406.2021.1910075
[78] Peng Zhou, Xintong Han, Vlad I. Morariu, and Larry S. Davis. 2017. Two-Stream Neural Networks for Tampered Face Detection. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops. IEEE, 1831–1839. https://doi.org/10.1109/CVPRW.2017.229
[79] Tianfei Zhou, Wenguan Wang, Zhiyuan Liang, and Jianbing Shen. 2021. Face Forensics in the Wild. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition. IEEE, 5774–5784. https://doi.org/10.1109/CVPR46437.2021.00572
[80] Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A. Efros. 2017. Unpaired Image-to-Image Translation Using Cycle-Consistent Adversarial Networks. In Proceedings of the 2017 IEEE International Conference on Computer Vision. IEEE, 2242–2251. https://doi.org/10.1109/ICCV.2 017.244
[81] Bojia Zi, Minghao Chang, Jingjing Chen, Xingjun Ma, and Yu-Gang Jiang. 2020. WildDeepfake: A Challenging Real-World Dataset for Deepfake Detection. In Proceedings of the 2020 28th ACM International Conference on Multimedia. ACM, 2382–2390. https://doi.org/10.1145/339417 1.3413769
\
:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.
:::
\
2026-03-01 00:02:45
How are you, hacker?
🪐 What’s happening in tech today, February 28, 2026?
The HackerNoon Newsletter brings the HackerNoon homepage straight to your inbox. On this day, we present you with these top quality stories. From Why “Small Changes” Don’t Exist in Production Game Systems to How to Navigate Identity, Direction, Story, and Sovereignty in the Age of AI, let’s dive right in.

By @ktdevjournal [ 5 Min read ] It doesn’t matter if you build games or a banking app - you don’t just have a pile of features and assets. You have an ecosystem for each bit of work Read More.

By @Lima_Writes [ 9 Min read ] When language comes back at you fast, coherent, and emotionally attuned, it feels like truth. Especially when you’re tired. Or lonely. Read More.
🧑💻 What happened in your world this week?
It's been said that writing can help consolidate technical knowledge, establish credibility, and contribute to emerging community standards. Feeling stuck? We got you covered ⬇️⬇️⬇️
ANSWER THESE GREATEST INTERVIEW QUESTIONS OF ALL TIME
We hope you enjoy this free reading material. Feel free to forward this email to a nerdy friend who'll love you for it. See you on Planet Internet! With love, The HackerNoon Team ✌️

2026-03-01 00:00:21
Go 1.21 includes a preview of a change to for loop scoping that we plan to ship in Go 1.22, removing one of the most common Go mistakes.
If you’ve written any amount of Go code, you’ve probably made the mistake of keeping a reference to a loop variable past the end of its iteration, at which point it takes on a new value that you didn’t want. For example, consider this program:
```go
package main

import "fmt"

func main() {
	done := make(chan bool)
	values := []string{"a", "b", "c"}
	for _, v := range values {
		go func() {
			fmt.Println(v)
			done <- true
		}()
	}
	// wait for all goroutines to complete before exiting
	for _ = range values {
		<-done
	}
}
```
The three created goroutines are all printing the same variable v, so they usually print “c”, “c”, “c”, instead of printing “a”, “b”, and “c” in some order.
The Go FAQ entry “What happens with closures running as goroutines?” gives this example and remarks, “Some confusion may arise when using closures with concurrency.”
Although concurrency is often involved, it need not be. This example has the same problem but no goroutines:
```go
package main

import "fmt"

func main() {
	var prints []func()
	for i := 1; i <= 3; i++ {
		prints = append(prints, func() { fmt.Println(i) })
	}
	for _, print := range prints {
		print()
	}
}
```
This kind of mistake has caused production problems at many companies, including a publicly documented issue at Let’s Encrypt. In that instance, the accidental capture of the loop variable was spread across multiple functions and much more difficult to notice:
```go
// authz2ModelMapToPB converts a mapping of domain name to authz2Models into a
// protobuf authorizations map
func authz2ModelMapToPB(m map[string]authz2Model) (*sapb.Authorizations, error) {
	resp := &sapb.Authorizations{}
	for k, v := range m {
		// Make a copy of k because it will be reassigned with each loop.
		kCopy := k
		authzPB, err := modelToAuthzPB(&v)
		if err != nil {
			return nil, err
		}
		resp.Authz = append(resp.Authz, &sapb.Authorizations_MapElement{
			Domain: &kCopy,
			Authz:  authzPB,
		})
	}
	return resp, nil
}
```
The author of this code clearly understood the general problem, because they made a copy of k, but it turns out modelToAuthzPB used pointers to fields in v when constructing its result, so the loop also needed to make a copy of v.
Tools have been written to identify these mistakes, but it is hard to analyze whether references to a variable outlive its iteration or not. These tools must choose between false negatives and false positives. The loopclosure analyzer used by go vet and gopls opts for false negatives, only reporting when it is sure there is a problem but missing others. Other checkers opt for false positives, accusing correct code of being incorrect. We ran an analysis of commits adding x := x lines in open-source Go code, expecting to find bug fixes. Instead we found many unnecessary lines being added, suggesting instead that popular checkers have significant false positive rates, but developers add the lines anyway to keep the checkers happy.
One pair of examples we found was particularly illuminating.
This diff was in one program:

```diff
 for _, informer := range c.informerMap {
+	informer := informer
 	go informer.Run(stopCh)
 }
```
And this diff was in another program:

```diff
 for _, a := range alarms {
+	a := a
 	go a.Monitor(b)
 }
```
One of these two diffs is a bug fix; the other is an unnecessary change. You can’t tell which is which unless you know more about the types and functions involved.
For Go 1.22, we plan to change for loops to make these variables have per-iteration scope instead of per-loop scope. This change will fix the examples above, so that they are no longer buggy Go programs; it will end the production problems caused by such mistakes; and it will remove the need for imprecise tools that prompt users to make unnecessary changes to their code.
To ensure backwards compatibility with existing code, the new semantics will only apply in packages contained in modules that declare go 1.22 or later in their go.mod files. This per-module decision provides developer control of a gradual update to the new semantics throughout a codebase. It is also possible to use //go:build lines to control the decision on a per-file basis.
Old code will continue to mean exactly what it means today: the fix only applies to new or updated code. This will give developers control over when the semantics change in a particular package. As a consequence of our forward compatibility work, Go 1.21 will not attempt to compile code that declares go 1.22 or later. We included a special case with the same effect in the point releases Go 1.20.8 and Go 1.19.13, so when Go 1.22 is released, code written depending on the new semantics will never be compiled with the old semantics, unless people are using very old, unsupported Go versions.
Go 1.21 includes a preview of the scoping change. If you compile your code with GOEXPERIMENT=loopvar set in your environment, then the new semantics are applied to all loops (ignoring the go.mod go lines). For example, to check whether your tests still pass with the new loop semantics applied to your package and all your dependencies:
```shell
GOEXPERIMENT=loopvar go test
```

We patched our internal Go toolchain at Google to force this mode during all builds at the start of May 2023, and in the past four months we have had zero reports of any problems in production code.
You can also try test programs to better understand the semantics on the Go playground by including a // GOEXPERIMENT=loopvar comment at the top of the program, like in this program. (This comment only applies in the Go playground.)
Although we’ve had no production problems, to prepare for that switch, we did have to correct many buggy tests that were not testing what they thought they were, like this:
```go
func TestAllEvenBuggy(t *testing.T) {
	testCases := []int{1, 2, 4, 6}
	for _, v := range testCases {
		t.Run("sub", func(t *testing.T) {
			t.Parallel()
			if v&1 != 0 {
				t.Fatal("odd v", v)
			}
		})
	}
}
```
In Go 1.21, this test passes because t.Parallel blocks each subtest until the entire loop has finished and then runs all the subtests in parallel. When the loop has finished, v is always 6, so the subtests all check that 6 is even, so the test passes. Of course, this test really should fail, because 1 is not even. Fixing for loops exposes this kind of buggy test.
To help prepare for this kind of discovery, we improved the precision of the loopclosure analyzer in Go 1.21 so that it can identify and report this problem. You can see the report in this program on the Go playground. If go vet is reporting this kind of problem in your own tests, fixing them will prepare you better for Go 1.22.
If you run into other problems, the FAQ has links to examples and details about using a tool we’ve written to identify which specific loop is causing a test failure when the new semantics are applied.
For more information about the change, see the design document and the FAQ.
David Chase and Russ Cox
This article is available on The Go Blog under a CC BY 4.0 Deed license.
Photo by Jp Valery on Unsplash
2026-03-01 00:00:03
It’s just a small change!
How often do we hear that we need to fix something? We need to add a small feature. We need to tweak something. Whether it’s code or publishing, someone just realized they need this for retention, or maybe an analyst brought in the newest data, so now we have to add just a few lines to the code. They won’t affect performance or any other department, I promise. And it’s only about three minutes of coder work - why not? Fast forward: they broke the “Buy” button on the front page of the store on release.
Why does this always happen with small changes? Well, if we think about it, we don’t usually think about it. Let me explain:
Designers think in features and user experience.
Engineers think in whole systems.
Producers think in tasks.
Stakeholders think in business outcomes.
And one small change is always perceived as something isolated, usually without everyone’s awareness. It is basically a cognitive shortcut. And that happens not because everyone is wrong or unprofessional; it’s because modern production systems are highly interconnected, so it’s impossible to know what could potentially be affected by anything - especially if you haven’t worked on the project for 15 years.

What is modern production? I’m glad you asked!
It doesn’t matter if you build games or a banking app - you don’t just have a pile of features and assets. You have an ecosystem for each bit of work: Art, Code, Design, UI, Marketing, Publishing (maybe even Project Management - wow, you are a rich developer), etc. And each of them has its own infrastructure, pipelines, workflows, and shared assets. To simplify, that can mean shared data schemas, builds, automation processes, UI bindings, and many other things.
What’s wrong with just making a small color change to one of the icons? Well, you spend 3 seconds changing a color code. Then you have to assemble a build. Then QA has to check your small change to confirm that you did indeed change the color. Then you have to assemble the build again, which waits in the queue behind the other builds.
Then we have to update the server with your changes - oh wait, did you tell anyone about that? No? That’s great, because you just submitted your changes during the commit freeze, and now deployment engineers have to fix the CI/CD pipeline, and we have to postpone the release by 4 days because it’s Friday.
And by the way - we have to communicate that to users, because they were waiting for this new version, and some of them decided not to wait that long and removed your app. Whoops, that’s awkward. Sorry to hear that.

That’s alright, I’m here to help you! Let me introduce you to the Change Propagation Surface (CPS) - the number of systems, pipelines, assets, and workflows that a change must pass through before it reaches the player.
Your change should not be estimated by its task size, like “1 hour of work.” Your change equals CPS × Coupling Density (the amount of work other departments need to do in order for this change to pass).
Think about it this way:
Let’s go back to the situation where you want to change the color of the icon. Those 3 seconds of work would affect UI, builds, player perception, experience, and design. It might also affect color coding for accessibility rules, plus build assembly, and finally server updates. That’s high CPS - unless, of course, you sneaked the change in without everyone’s awareness (I see that - drop it!).
The same goes for asset swaps or changing a stat value: it affects memory, AI tuning, destruction logic, etc. Don’t do that unless someone from senior leadership said it’s low CPS - then just do it and see how it goes.
You can apply this approach basically anywhere in production, because it is not an abstract thing at all and can be estimated.
For example:
Each of these items counts as a +1 CPS factor. Accordingly, the more such “items” you touch, the higher the CPS number you get. And with that information, you can create a small estimation matrix like:
CPS 1-2 - Local change
CPS 3-5 - Cross-functional change
CPS 6+ - Systemic change
One more time, the formula is: Impact = CPS × Coupling Density. Easy!

Let’s see how it works in a real-life example:
So your developer went on holiday, completed a math course on LinkedIn, and came back saying there is a more efficient way of calculating EXP. The change is “one line of code.” Okay, but after reading this article, you already know how it works in reality and that it touches multiple things:
That means the CPS is more than 7. So now you see that even though the code diff is tiny, the propagation surface is systemic and has a massive potential outcome. In other words, if XP progression speeds up, it shifts the economy, content availability, battle pass value, retention curves, and so on - so even if the implementation takes about 10 minutes, the ripple effect can take weeks of work.
Why does live service production make it worse? Because the effect is amplified by content being reused across multiple features, by telemetry and the economy being tightly coupled, and by systems being persistent and often requiring backward compatibility.
So the real cost you pay for propagation lies in prolonged timelines, hidden rework, cross-team friction, technical debt, burnout, and eventually people resigning, directly or indirectly.
Instead of thinking, “Oh, this is a small change,” we should think, “What systems does this change touch?” Think of it as infrastructure, not a feature, and always bring it to cross-team awareness. And if you are capable enough, try to estimate the surface area, not just the exact small change.
The whole point of my way-too-long introduction is that there is no such thing as a small change in production systems. There are only changes with misunderstood affected areas. And the more senior you become, the more your vision shifts toward understanding how a change will travel instead of trying to avoid the change altogether.
2026-02-28 23:28:23
In this work, we construct the largest dataset for multimodal pre-training in Chinese, which consists of over 1.9TB images and 292GB texts that cover a wide range of domains. We propose a cross-modal pretraining method called M6, referring to Multi-Modality to Multi-Modality Multitask Mega-transformer, for unified pretraining on the data of single modality and multiple modalities. We scale the model size up to 10 billion and 100 billion parameters, and build the largest pretrained model in Chinese. We apply the model to a series of downstream applications, and demonstrate its outstanding performance in comparison with strong baselines. Furthermore, we specifically design a downstream task of text-guided image generation, and show that the finetuned M6 can create high-quality images with high resolution and abundant details.
Multimodal Pretraining; Multitask; Text-to-Image Generation
Pretraining has become a focus of research in natural language processing (NLP) [1, 2, 7, 16, 18, 19, 27, 31, 37, 44, 49]. The recent GPT-3, with over 175 billion parameters, demonstrates that large models trained on big data have extremely large capacity and can outperform the state of the art in downstream tasks, especially in the zero-shot setting. The rapid development of pretraining in NLP has also sparked cross-modal pretraining. A number of studies [4, 11, 17, 22, 24, 25, 28, 29, 38, 51] have set new state-of-the-art performances for various cross-modal downstream tasks.
Unfortunately, most recent studies focus on pretraining on English data; there is a lack of both large-scale datasets in Chinese and large-scale models pretrained on Chinese data. Therefore, in this work, we develop a large-scale dataset, M6-Corpus, which consists of over 1.9TB of images and 292GB of texts. To the best of our knowledge, this is the largest Chinese dataset for pretraining in both multimodality and natural language. The dataset, collected from webpages, consists of different types of data and covers a wide range of domains, including encyclopedia, question answering, forum discussion, product description, etc. We also design sophisticated cleaning procedures to ensure that the data are of high quality.
Furthermore, in order to sufficiently leverage such a large amount of high-quality data, we propose to build an extremely large model that can process data of multiple modalities and adapt to different types of downstream tasks. We thus propose a novel model called M6, referring to MultiModality-to-MultiModality Multitask Mega-transformer. The model is based on the transformer, and it is pretrained with multiple tasks. Pretraining endows the model with the capability of single-modality and multimodality understanding and generation. Based on the architecture of M6, we build M6-10B and M6-100B, which are scaled up to 10 billion and 100 billion parameters, respectively. M6-100B is, at the time of writing, the largest model pretrained on Chinese data. We apply the model to a series of downstream applications, including product description generation, visual question answering, community question answering, Chinese poem generation, etc., and our experimental results show that M6 outperforms a series of strong baselines.
Another contribution of this work is that we are the first to incorporate pretraining with text-to-image generation. Following Ramesh et al. [32], we leverage a two-stage framework for image generation. Specifically, we apply a trained vector-quantized generative adversarial network to represent images as discrete image codes, and we then use the pretrained M6 to learn the relations between texts and codes. Such learning bridges the two modalities and enables controllable text-to-image generation.
To summarize, the contributions of M6 are as follows:
We collect and develop the largest multi-modal and text dataset in Chinese to date, which is one of the key contributions of this paper. In this section, we first identify the limitations of existing datasets and then describe the construction and preprocessing procedure of our proposed dataset.
The construction of a large-scale corpus with high quality and broad domain coverage is crucial to Chinese pretraining. In earlier works, Chinese Wikipedia is one of the most frequently used datasets for training Chinese language models. It contains 1.6GB of texts (around 0.4B tokens) covering around 1M encyclopedia entries. Another corpus of comparable size is the THUCTC [39] dataset, which includes 740K news articles. However, with the rapidly increasing capacity of recent language models, the scale of these existing datasets is clearly insufficient. Recently, Cui et al. [5] employed unreleased extended data 10 times larger than CN-Wikipedia to pretrain their Chinese language model. Xu et al.
[47] released a 100GB corpus named CLUECorpus2020, which is retrieved from the multilingual Common Crawl dataset. However, the scale of these datasets is still insufficient to facilitate super-large-scale pretraining compared with existing English pretrained models. For example, GPT-3 contains 175B parameters and is trained on 570GB of texts. Moreover, multi-modal pretraining requires image-text pairs rather than plain texts alone.
To perform large-scale multi-modal pretraining and learn complex world knowledge in Chinese, the dataset must provide both plain texts and image-text pairs at super-large scale, covering a wide range of domains so that the model can learn the complex world knowledge of different fields. Collecting data of multiple modalities raises the difficulty of constructing such a dataset, as the data for multimodal pretraining are usually image-text pairs, where in each pair the text provides a detailed description of a fraction of the image.
Though there is a tremendous amount of text and image resources on the world wide web, a corpus for multimodal pretraining should satisfy the following properties:
(1) the sentences should be fluent natural language of normal length, free of meaningless tokens such as markups, duplicate punctuation marks, and random combinations of characters;
(2) the images should be natural and realistic, with resolutions identifiable by humans;
(3) neither the texts nor the images should contain illegal content, such as pornography or violence;
(4) the images and texts should be semantically relevant;
(5) the dataset should cover a wide range of fields, e.g., sports, politics, and science, so as to endow the model with sufficient world knowledge.
Based on the requirements above, we collect data of both plain texts and image-text pairs, spanning different types including encyclopedia, crawled webpages, community question answering, forums, product descriptions, etc. We present the details in Table 3. The collected corpus consists of both plain texts and image-text pairs, which is compatible with the designed text-only and multi-modal pretraining tasks. The data also has large coverage over domains, such as science, entertainment, sports, politics, common sense of life, etc. We compare characteristics of our corpus with existing datasets used for Chinese pretraining in Table 2. The size of our dataset is much larger than the previous ones. To our knowledge, this is the first large-scale, multimodal and multi-domain corpus for Chinese pretraining.
We implement sophisticated preprocessing to obtain clean data. For text data, we first remove HTML markups and duplicate punctuation marks, and we retain only characters and punctuation marks in Chinese and English. We remove topics shorter than 5 characters and contents shorter than 15 characters. We further apply in-house spam detection to remove sentences containing words related to certain political issues, pornography, or words in the list of dirty, naughty, and other bad words. To preserve the linguistic acceptability of the texts, we use a language model to evaluate their perplexities, and sentences with high perplexities are discarded. Only images with at least 5000 pixels are reserved for pretraining. A sequence of classifiers and heuristic rules is applied to filter out images containing illegal content, and we also use a pretrained image scorer to evaluate the qualities of images. For images and texts in crawled webpages, we only consider images and their surrounding text as relevant image-text pairs; other sentences in the webpages are discarded.
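The text-cleaning steps above can be sketched as follows. This is an illustrative sketch, not the paper's pipeline: `lm_score` is a hypothetical stand-in for the in-house language model (assumed to return the average per-token negative log-likelihood), and the thresholds are examples rather than the actual values.

```python
import math
import re

def clean_text(text, lm_score, max_ppl=1000.0, min_len=15):
    """Hypothetical sketch of the described text-cleaning steps."""
    # Strip HTML markup and collapse duplicate punctuation marks.
    text = re.sub(r"<[^>]+>", "", text)
    text = re.sub(r"([!?,.;:])\1+", r"\1", text)
    # Keep only Chinese/English characters, digits, punctuation, whitespace.
    text = re.sub(r"[^\u4e00-\u9fffA-Za-z0-9!?,.;:'\"()\s]", "", text)
    # Drop contents that are too short.
    if len(text.strip()) < min_len:
        return None
    # Discard sentences whose LM perplexity is too high.
    if math.exp(lm_score(text)) > max_ppl:
        return None
    return text
```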
Multimodal pretraining leverages both the power of the self-attention-based transformer architecture and pretraining on large-scale data. We endeavor to endow the model with strong capabilities of cross-modal understanding and generation. In this section, we describe the details of our proposed pretrained model M6, which stands for MultiModality-to-MultiModality Multitask Mega-transformer.



Mainstream multimodal pretraining methods transform images into feature sequences via object detection. However, the performance of the object detectors, as well as the expressivity of their backbones, strongly impacts the final performance of the pretrained models on downstream tasks. We observe that a large proportion of images contain only a few objects. Taking our e-commerce data as an example, we randomly sample 1M images and perform object detection on them; the results show that over 90% of the images contain fewer than 5 objects, and the detected objects overlap heavily with each other. To alleviate this issue, we turn to a simple but effective solution following Gao et al. [12] and Dosovitskiy et al. [8]. In general, we split an image into patches and extract features of the 2D patches with a trained feature extractor, say ResNet-50. We then line up the representations into a sequence by their positions. Processing the input word sequence is much simpler: following similar preprocessing procedures in previous work [4, 11, 24], we apply WordPiece [34, 45] and masking to the word sequence and embed it with an embedding layer, following BERT [6].
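As a rough illustration of this patch pipeline (not the authors' code), the sketch below splits an image into non-overlapping patches and lines up one feature vector per patch by position; a channel-wise mean stands in for the trained ResNet-50 extractor the paper uses.

```python
import numpy as np

def image_to_patch_sequence(image, patch_size=32, feature_extractor=None):
    """Split an H x W x C image into non-overlapping 2D patches,
    extract one feature per patch, and order the features by position."""
    H, W, C = image.shape
    feats = []
    for i in range(0, H - patch_size + 1, patch_size):
        for j in range(0, W - patch_size + 1, patch_size):
            patch = image[i:i + patch_size, j:j + patch_size]
            if feature_extractor is not None:
                feats.append(feature_extractor(patch))
            else:
                # Stand-in "feature": channel-wise mean of the patch.
                feats.append(patch.mean(axis=(0, 1)))
    return np.stack(feats)  # shape: (num_patches, feature_dim)

# A 64x64 image with 32x32 patches yields a sequence of 4 patch features.
seq = image_to_patch_sequence(np.zeros((64, 64, 3)), patch_size=32)
```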


We integrate the image embeddings 𝑒𝑖 and the word embeddings 𝑒𝑡 into the cross-modal embedding sequence 𝑒 = {𝑒𝑖, 𝑒𝑡} and send the sequence to the transformer backbone for high-level feature extraction. To distinguish the representations of different modalities, we add corresponding segment embeddings. Specifically, we leverage self-attention-based transformer blocks for unified cross-modal representation learning. The building block is identical to that of BERT or GPT, consisting of self-attention and a point-wise feed-forward network (FFN). On top of the transformer backbone, we add an output layer for word prediction, tying its weights to those of the embedding layer.
In the unified framework, we use different masking strategies to enable encoding and decoding. The input is segmented into three parts, including visual inputs, masked linguistic inputs, and complete linguistic inputs. We apply bidirectional masking to both the visual inputs and masked linguistic inputs, and we apply causal masking to the complete linguistic inputs. Thus the model is allowed to encode and decode in the same framework.
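This three-part masking scheme can be sketched as a single combined attention mask. The sketch below reflects our reading of the description, not the released implementation: the visual and masked-linguistic parts form a bidirectional prefix, while the complete-linguistic part attends causally (with full access to the prefix).

```python
import numpy as np

def build_attention_mask(n_visual, n_masked_text, n_complete_text):
    """Build a unified attention mask: 1 = may attend, 0 = blocked.
    Visual + masked-linguistic tokens attend bidirectionally within
    their prefix; complete-linguistic tokens attend causally."""
    p = n_visual + n_masked_text            # bidirectional prefix length
    n = p + n_complete_text
    mask = np.zeros((n, n), dtype=int)
    mask[:p, :p] = 1                        # bidirectional within prefix
    for t in range(p, n):
        mask[t, :p] = 1                     # decoder sees the full prefix
        mask[t, p:t + 1] = 1                # causal over complete text
    return mask
```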
We pretrain the model with the multitask setup, including text-to-text transfer, image-to-text transfer, and multimodality-to-text transfer. Thus the model can process information of different modalities and perform both single-modal and cross-modal understanding and generation.
Text-to-text Transfer As demonstrated in Figure 3, the model learns to perform text denoising and language modeling in the text-to-text transfer setting. In text denoising, we mask a proportion of the input text, 15% in practice following BERT [6]. Specifically, we mask a continuous span of text with a single mask token, and the model learns to decode the whole sequence. This encourages the model to learn both recovery and length prediction. Besides, to improve the model's generation ability, we add a language modeling setup, where the encoder receives no input and the decoder learns to generate words based on the previous context.
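The span corruption described above can be sketched as follows; this is a hypothetical helper illustrating the idea, not the paper's code.

```python
import random

def mask_span(tokens, mask_ratio=0.15, mask_token="[MASK]", rng=None):
    """Replace one contiguous span (~15% of the tokens, following BERT's
    ratio) with a single mask token. The model must decode the original
    sequence, so it learns both recovery and length prediction."""
    rng = rng or random.Random(0)
    span_len = max(1, int(len(tokens) * mask_ratio))
    start = rng.randrange(len(tokens) - span_len + 1)
    corrupted = tokens[:start] + [mask_token] + tokens[start + span_len:]
    return corrupted, tokens  # (encoder input, decoder target)
```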


Image-to-text transfer Image-to-text transfer is similar to image captioning: the model receives visual information as input and learns to generate a corresponding description. In this setting, we add the aforementioned patch feature sequence to the input and leave the masked linguistic input blank. The model encodes the patch features and decodes the corresponding text.
Multimodality-to-text transfer Based on the setup of image-to-text transfer, we additionally add masked linguistic inputs, and thus the model should learn to generate the target text based on both the visual information and the noised linguistic information. This task allows the model to adapt to the downstream tasks with both visual and linguistic inputs.
We scale up the model to 10 billion and 100 billion parameters, named M6-10B and M6-100B respectively. The increase in model size provides a much larger capacity so that the model can learn knowledge from more data. To construct M6-10B, we simply scale up the model through hyperparameter tuning.
To be more specific, we increase the size of the hidden states and the number of layers. To better leverage GPU memory, we apply mixed-precision training and activation checkpointing. Still, the model cannot fit into a single GPU, so we use model parallelism to split the feed-forward networks and attention heads across multiple GPUs, following the implementation of Megatron-LM [36].
However, directly scaling up to M6-100B is much more difficult, as it poses greater challenges for computation resources. Inspired by the recent progress in sparse activation [10, 20, 35], we instead combine Mixture-of-Experts (MoE) with M6 to build the 100-billion-parameter version. Note that the original MoE implementation requires mesh-tensorflow as well as TPUs, which limits researchers without such resources. We therefore implement M6-100B with MoE using our in-house framework Whale [43] to perform model parallelism on GPUs. We present the key statistics of the models of different scales in Table 4.
Specifically, different from the conventional FFN layer, the MoE layer is a parallel combination of multiple FFN layers, each of which acts as an expert; this is also called expert parallelism. The model learns a sparse gating network to route tokens to specific experts. Each token is thus sent to only a small set of experts, so the computation is much less than in dense models. This design is highly efficient, as it realizes both data parallelism and expert parallelism across workers. The computation of the MoE layer for a specific token 𝑥 can be described as below:
MoE(𝑥) = ∑_{𝑖∈T} 𝑔(𝑥)_𝑖 𝑓_𝑖(𝑥)

where 𝑔(·) refers to the sparse gating function, and T refers to the indices of the top-𝑘 values of 𝑔(·). The output of the MoE layer is a linear combination of the outputs of the selected expert FFNs 𝑓(·).
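A dense numpy sketch of this computation follows; the real layer is sparse and distributed across GPUs, and the softmax-over-selected-gates normalization is one common choice rather than the paper's stated formula.

```python
import numpy as np

def moe_layer(x, experts, gate_w, k=1):
    """Top-k gated mixture of experts for a single token x.
    `experts` is a list of expert FFNs f_i; `gate_w` is the gating
    weight matrix defining g(x) = x @ gate_w."""
    logits = x @ gate_w                      # one gate score per expert
    top_k = np.argsort(logits)[-k:]          # indices T of top-k gates
    # Normalize the selected gate values (softmax over the top-k only).
    g = np.exp(logits[top_k] - logits[top_k].max())
    g = g / g.sum()
    # Output: g-weighted linear combination of the selected experts.
    return sum(gi * experts[i](x) for gi, i in zip(g, top_k))
```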
In expert parallelism, the parameters of the experts are not shared across workers, while those of the other parts are identical across workers. It is therefore necessary to perform all-to-all communication at the MoE layers in order to dispatch tokens to their selected experts and gather the outputs back to their original workers. While Lepikhin et al. [20] and Fedus et al. [10] implement MoE on TPUs with one expert per MoE layer on each TPU, we implement our model on Nvidia GPUs with several experts per MoE layer on each GPU so as to fully utilize the memory. As all-to-all communication takes up a large amount of time, optimizing its efficiency is highly significant; we implement a series of optimizations, including half-precision communication. A key problem is load balancing: due to dynamic routing, tokens can crowd onto only a few experts. Following Fedus et al. [10], we apply expert capacity, the maximum number of tokens per expert (𝐶 = 𝑁 · 𝑐/𝑚, where 𝐶 is the expert capacity, 𝑁 the number of tokens in a batch, 𝑐 the capacity factor, a hyperparameter usually larger than 1.0, and 𝑚 the number of experts), to alleviate this problem. Tokens beyond the capacity of an expert are dropped from the computation and sent to the next layer through the residual connection. We find that the overloading problem can be severe, and this issue may be significant in future research on expert models.
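The capacity rule and the token-dropping behavior above can be sketched as follows; the helper names are ours, and the arrival-order dropping policy is a simplification.

```python
def expert_capacity(num_tokens, capacity_factor, num_experts):
    """C = N * c / m: N tokens in the batch, capacity factor c (> 1.0),
    m experts."""
    return int(num_tokens * capacity_factor / num_experts)

def dispatch(token_ids, expert_assignment, capacity):
    """Keep at most `capacity` tokens per expert (in arrival order);
    overflow tokens are dropped from the MoE computation and pass to
    the next layer via the residual connection."""
    kept, dropped, counts = [], [], {}
    for tok, e in zip(token_ids, expert_assignment):
        if counts.get(e, 0) < capacity:
            counts[e] = counts.get(e, 0) + 1
            kept.append((tok, e))
        else:
            dropped.append(tok)
    return kept, dropped
```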
Besides the optimization of all-to-all communication, we compare top-2 gating with top-1 gating and find that they achieve similar model performance in perplexity, while the latter converges slightly slower. The effectiveness of top-1 gating enables faster computation. We also apply memory optimization methods for higher efficiency. We find that clipping gradients globally increases the cost of all-to-all communication, as it computes norms across all experts, and thus we apply local clipping for memory saving. We implement M6-100B with around 100 billion parameters on 128 Nvidia A100s, and the pretraining speed reaches 1440 samples/s (for samples of sequence length 272).
We demonstrate that using the MoE structure for model-size scaling is effective: it achieves performance similar to that of M6-10B, our largest dense model, in 2-3 times less training time. The negative log perplexity of M6-100B reaches −2.297, compared with −2.253 for M6-10B with twice the training time. This shows that the MoE-based M6 model has advantages on a time basis over dense models with many more FLOPs.
Text-to-image generation has long been an open problem. Previous studies mainly focused on generation in limited domains, where Generative Adversarial Networks (GANs) [14, 48] have been the dominant methods. Following Ramesh et al. [32], we leverage a two-stage framework for text-to-image generation, consisting of discrete representation learning and language modeling.

In the first stage, we focus on transforming images into sequences of discrete codes. There are a number of alternatives for discrete code generation, including VQVAE [41] and VQGAN [9]. In the second stage, we build a language model that learns to generate text and code sequences. For finetuning, we add code embedding and output layers to the pretrained M6. We concatenate the word sequence and the aforementioned generated code sequence as the input, and train with the objective of autoregressive language modeling. At inference, we input the text sequence, and the model generates codes autoregressively with top-k sampling. The last step transforms the code sequence into an image with the generator from the first stage.
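The inference loop above can be sketched as follows. This is a hypothetical interface, not the paper's code: `next_code_logits` stands in for a forward pass of the finetuned M6, and the sampled codes would then be handed to the first-stage generator to reconstruct pixels.

```python
import math
import random

def generate_image_codes(text_tokens, next_code_logits, seq_len=16,
                         k=50, rng=None):
    """Autoregressively sample discrete image codes with top-k sampling,
    conditioned on the input text."""
    rng = rng or random.Random(0)
    codes = []
    for _ in range(seq_len):
        logits = next_code_logits(text_tokens, codes)  # assumed model call
        # Restrict sampling to the k highest-scoring codes.
        top = sorted(range(len(logits)), key=lambda i: logits[i])[-k:]
        weights = [math.exp(logits[i] - max(logits)) for i in top]
        codes.append(rng.choices(top, weights=weights)[0])
    return codes  # pass to the stage-1 generator to decode an image
```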
We construct a dataset for text-to-image generation in e-commerce. Specifically, we collect over 50 million product titles and images from mobile Taobao. We apply a series of processing methods to filter out unqualified images. We filter images with complex background features (characters, patterns, etc.) using an in-house white-background image detector and an OCR model, and then filter images containing more than 3 objects with our in-house object detector based on Faster R-CNN [33]. We finally obtain 1.8M high-quality product image-text pairs for finetuning. Compared with images in general domains, our collected data have the following features: the image and text are highly correlated, as the text describes key features of the product, and the images have no complex background, which makes them easier to learn from than images in public datasets such as MSCOCO [26].
We demonstrate two examples in Figure 4 and Figure 5. The generated images have high quality, and the generated objects resemble real ones. Furthermore, in Figure 6, we find that the model is able to imagine items according to the query "military style camouflage high heels" (军旅风迷彩高跟鞋), which do not exist in the real world. This imagination ability provides room for creative design in real-world industrial scenarios, such as clothing design and shoe design.
We also finetune M6 under our proposed framework on another dataset of 3 million images crawled from the Internet, covering more general domains, and find that the model can adapt to different domains. As shown in Figure 7, the model is able to generate clip arts of robots. This reveals the versatility of the framework in text-to-image generation.
We demonstrate our experimental results on a visual question answering dataset, and we illustrate how we directly apply the pre-trained M6 to the VQA application.


We leverage the FM-IQA dataset [13] as the Chinese visual QA benchmark, which requires the model to generate an answer given an image and a question. We implement a transformer-based model as our baseline. For evaluation, since there is no official release of the test set, we construct one by randomly sampling 200 examples from the dataset, and we evaluate overall accuracy by human evaluation. The results are shown in Table 5. The pretrained M6-base outperforms the baseline by a large margin (+6.2%), which indicates the effectiveness of multimodal pretraining. Scaling up the model to M6-10B brings a further 5.2% improvement.
Furthermore, we show that simply finetuning on such a small VQA dataset may limit the potential of M6. Therefore, we directly leverage M6 for the VQA application. We find that the model is able to recognize general features and provide more related knowledge based on its understanding. Though the model pretrained on pseudo-parallel image-text pairs cannot directly answer questions about detailed features, such as color, number, etc., it is able to answer questions related to background knowledge. We demonstrate some examples in Figure 8.
Image captioning requires the model to generate a caption describing a given image, which examines the model's ability in cross-modal generation. We construct a dataset (named E-Commerce IC) containing pairs of product descriptions and product images from Taobao. Since overly long or short descriptions may be noisy, we discard pairs whose description is longer than 100 words or shorter than 10 words. To avoid offensive generations, we further use an in-house tool to filter descriptions that may contain dirty words (i.e., pornographic or violent words). Finally, E-Commerce IC contains about 260K text-image pairs. We finetune the model with the image-to-text transfer task on E-Commerce IC.


We compare our model with a transformer baseline in human evaluation. We ask several annotators with linguistic backgrounds to evaluate from three perspectives: grammar (whether a text is fluent and free of grammatical errors), correctness (whether a text is faithful to the image), and richness (whether a text is informative and attractive). During the evaluation, we randomly sample 100 images from the test set. For each image, an annotator scores the texts generated by the different models. The scores are within the range of [0, 5].
The results in Table 6 show that M6-base outperforms the baseline on all of the metrics. All models achieve high scores in grammar. However, in both correctness and richness, M6-base outperforms the baseline by a large margin (+18.2% and +14.4%), indicating that multimodal pretraining helps to generate more faithful, informative, and attractive texts. Scaling up the model to M6-10B further improves correctness and richness (by about 14.7% and 7.0%). Figure 9 illustrates two examples of image captioning.


To demonstrate its potential in intelligent chatbot applications, we further employ the M6 model to generate long answers in the style of forum discussion. Human-generated questions are collected from various Chinese forums and input to the model to generate answers. At inference, we append a question mark and an "Answer:" token to the prompt, which better triggers the model to generate an answer. To facilitate the generation of longer and more informative texts, we pick more complex questions.
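This prompt trick can be sketched as a small helper; the exact trigger string is our assumption (the paper uses Chinese prompts), so treat the tokens below as illustrative.

```python
def build_qa_prompt(question):
    """Ensure the question ends with a question mark, then append an
    answer-trigger token so the model continues with an answer."""
    q = question.rstrip()
    if not q.endswith(("?", "？")):
        q += "？"
    return q + "答："   # "Answer:" in Chinese (assumed token)
```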
Figure 10 demonstrates an example of general question answering. The model describes personal experiences related to the question and also states the answer at the end. This generated text confused human annotators and passed the Turing test, showing that the model can not only answer general questions but also generate long, fluent text.
We apply the pretrained model to Chinese poem generation. The model is able to generate poems of genres with format constraints.


Ancient Chinese poetry has various specific formats. We adopt the simplest constraints that
Text generation under the format constraint is done in a search framework: we generate short sentences ending with punctuation until the number of words meets the constraint. We repeat this process until the model generates an "
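One way to read this search loop is sketched below; `sample_char` is a hypothetical stand-in for one model sampling step, and the per-line character count reflects our reading that classical Chinese genres fix the number of characters per line.

```python
def generate_poem_line(sample_char, n_chars, end_punct="，"):
    """Sample characters one at a time until the line reaches its fixed
    length, then close it with punctuation. Repeating this per line
    until an end-of-sequence token appears yields the full poem."""
    chars = []
    while len(chars) < n_chars:
        chars.append(sample_char(chars))  # condition on the prefix
    return "".join(chars) + end_punct
```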
We evaluate the model's ability in cross-modal retrieval. Specifically, we construct a dataset (named E-Commerce ITM) containing pairs of texts and images from mobile Taobao, where each pair belongs to a single item. We collect 235K products in the clothing industry from Taobao. For each product, aside from the product image, we obtain a query by rewriting the product title: we conduct named entity recognition on the title using an in-house tool, which extracts the terms describing the style, color, category, and texture of the product.


These terms are then concatenated into a natural language query, which is used for image-text matching. The length of each query is between 6 and 12 words. Pairs of a query and the corresponding product image are labeled as positive samples. Negative samples are constructed by randomly substituting the query in the original pairs.
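The sampling scheme above can be sketched as follows (an illustrative helper, not the paper's code): each (query, image) pair is a positive, and a negative pairs the same image with a query drawn from a different product.

```python
import random

def make_itm_pairs(queries, images, rng=None):
    """Build (query, image, label) triples: label 1 for matched pairs,
    label 0 for pairs whose query was substituted at random."""
    rng = rng or random.Random(0)
    pairs = []
    for i, (q, img) in enumerate(zip(queries, images)):
        pairs.append((q, img, 1))                 # matched pair
        j = rng.randrange(len(queries) - 1)
        j = j + 1 if j >= i else j                # any index except i
        pairs.append((queries[j], img, 0))        # mismatched pair
    return pairs
```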
We require the model to perform binary classification to discriminate positive from negative samples. We compare our model with InterBERT [25], which is also a Chinese multi-modal pretrained model effective in cross-modal classification downstream tasks. InterBERT utilizes object-based features and has also been pretrained on Taobao product image-text data.
The results are shown in Table 7. Note that InterBERT and M6-base are both implemented with transformer-based architectures and have similar model scales. Nevertheless, M6-base outperforms InterBERT by 10.3%. In experiments, we find that product images generally contain relatively few detected objects, which may harm the performance of object-feature-based models on this task. M6 avoids this problem by employing patch features and achieves much better performance.
The tremendous success of NLP pretraining, including BERT [6], GPT [2, 30, 31], and other related studies [1, 7, 19, 27, 49], has inspired research in cross-modal representation learning. Recent studies also show that the ubiquitous transformer architecture [42] can be extended to different fields, including computer vision [3, 8]. Therefore, the simplest way to combine recent pretraining methods with cross-modal representation learning is to extend BERT. From the perspective of architecture, there are mainly two types: single-stream models and dual-stream models. The single-stream model is simple and has gradually become the mainstream architecture. These models mostly differ in their designs of pretraining tasks or in the construction of input image features; they are mainly pretrained with masked language modeling, masked object classification, and image-text matching. VisualBERT [23] and Unicoder-VL [22] simply use BERT and are pretrained with the aforementioned tasks. UNITER [4] pretrains the model with an additional task of word-region alignment. Oscar [24] enhances the alignment between objects and their corresponding words or phrases. VILLA [11] further improves performance by adding its proposed adversarial learning methods to pretraining and finetuning. Apart from pretraining tasks, some studies focus on image features. Most pretraining methods for multimodal representation learning utilize features generated by a trained object detector, say Faster R-CNN [33]. PixelBERT [17] accepts raw images as input and extracts their latent representations with a learnable ResNet [15] or ResNeXt [46]. FashionBERT [12] splits images into patches with a trained ResNet without co-training. Besides single-stream models, dual-stream models can also achieve outstanding performance, such as ViLBERT [28], LXMERT [40], and InterBERT [25].
ViLBERT-MT [29] enhances model performance with multi-task finetuning, and ERNIE-ViL [50] enhances the model with scene graph information. In spite of these successful cases, further research is still required to fully understand the success of multimodal pretraining.
In this work, we propose M6-Corpus, the largest dataset for pretraining in Chinese, which consists of over 1.9TB of images and 292GB of texts. The dataset has large coverage over domains, including encyclopedia, question answering, forum discussion, common crawl, etc. We propose a model called M6 that is able to process information of multiple modalities and perform both single-modal and cross-modal understanding and generation. With sophisticated deployment, the model is scaled up to 10B and 100B parameters, both among the largest multimodal pretrained models. We apply the model to a series of downstream applications, showing its versatility. More specifically, we design a downstream task of text-guided image generation, and the finetuned M6 reaches superior performance by producing images of high quality.
In the future, we will continue the pretraining of extremely large models by increasing the scale of data and models to explore the limit of performance, and we also endeavor to search for more downstream applications for further generalization.
[1] Hangbo Bao, Li Dong, Furu Wei, Wenhui Wang, Nan Yang, Xiaodong Liu, Yu Wang, Jianfeng Gao, Songhao Piao, Ming Zhou, et al. 2020. Unilmv2: Pseudo-masked language models for unified language model pre-training. In International Conference on Machine Learning. PMLR, 642–652.
[2] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165 (2020).
[3] Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. 2020. End-to-end object detection with transformers. In European Conference on Computer Vision. Springer, 213–229.
[4] Yen-Chun Chen, Linjie Li, Licheng Yu, Ahmed El Kholy, Faisal Ahmed, Zhe Gan, Yu Cheng, and Jingjing Liu. 2020. UNITER: UNiversal Image-TExt Representation Learning. In ECCV 2020. 104–120.
[5] Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for chinese natural language processing. arXiv preprint arXiv:2004.13922 (2020).
[6] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT 2019. 4171–4186.
[7] Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified Language Model Pre-training for Natural Language Understanding and Generation. In NeurIPS 2019. 13042–13054.
[8] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929 (2020).
[9] Patrick Esser, Robin Rombach, and Björn Ommer. 2020. Taming Transformers for High-Resolution Image Synthesis. arXiv:2012.09841 [cs.CV]
[10] William Fedus, Barret Zoph, and Noam Shazeer. 2021. Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity. CoRR abs/2101.03961 (2021). arXiv:2101.03961 https://arxiv.org/abs/2101.03961
[11] Zhe Gan, Yen-Chun Chen, Linjie Li, Chen Zhu, Yu Cheng, and Jingjing Liu. 2020. Large-Scale Adversarial Training for Vision-and-Language Representation Learning. In NeurIPS 2020.
[12] Dehong Gao, Linbo Jin, Ben Chen, Minghui Qiu, Peng Li, Yi Wei, Yi Hu, and Hao Wang. 2020. Fashionbert: Text and image matching with adaptive loss for cross-modal retrieval. In SIGIR 2020. 2251–2260.
[13] Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, and Wei Xu. 2015. Are you talking to a machine? dataset and methods for multilingual image question answering. arXiv preprint arXiv:1505.05612 (2015).
[14] Ian J Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial networks. arXiv preprint arXiv:1406.2661 (2014).
[15] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In CVPR 2016. 770–778.
[16] Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. DeBERTa: Decoding-enhanced BERT with disentangled attention. arXiv preprint arXiv:2006.03654 (2020).
[17] Zhicheng Huang, Zhaoyang Zeng, Bei Liu, Dongmei Fu, and Jianlong Fu. 2020. Pixel-bert: Aligning image pixels with text by deep multi-modal transformers. arXiv preprint arXiv:2004.00849 (2020).
[18] Zihang Jiang, Weihao Yu, Daquan Zhou, Yunpeng Chen, Jiashi Feng, and Shuicheng Yan. 2020. Convbert: Improving bert with span-based dynamic convolution. arXiv preprint arXiv:2008.02496 (2020).
[19] Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. CoRR abs/1909.11942 (2019).
[20] Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. 2020. Gshard: Scaling giant models with conditional computation and automatic sharding. arXiv preprint arXiv:2006.16668 (2020).
[21] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, and Luke Zettlemoyer. 2021. BASE Layers: Simplifying Training of Large, Sparse Models. CoRR abs/2103.16716 (2021).
[22] Gen Li, Nan Duan, Yuejian Fang, Daxin Jiang, and Ming Zhou. 2019. Unicoder-VL: A Universal Encoder for Vision and Language by Cross-modal Pre-training. CoRR abs/1908.06066 (2019).
[23] Liunian Harold Li, Mark Yatskar, Da Yin, Cho-Jui Hsieh, and Kai-Wei Chang. 2019. VisualBERT: A Simple and Performant Baseline for Vision and Language. CoRR abs/1908.03557 (2019).
[24] Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, Yejin Choi, and Jianfeng Gao. 2020. Oscar: Object-Semantics Aligned Pre-training for Vision-Language Tasks. CoRR abs/2004.06165 (2020).
[25] Junyang Lin, An Yang, Yichang Zhang, Jie Liu, Jingren Zhou, and Hongxia Yang. 2020. Interbert: Vision-and-language interaction for multi-modal pretraining. arXiv preprint arXiv:2003.13198 (2020).
[26] Tsung-Yi Lin, Michael Maire, Serge J. Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. 2014. Microsoft COCO: Common Objects in Context. In ECCV 2014. 740–755.
[27] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A Robustly Optimized BERT Pretraining Approach. CoRR abs/1907.11692 (2019).
[28] Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee. 2019. ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks. In NeurIPS 2019. 13–23.
[29] Jiasen Lu, Vedanuj Goswami, Marcus Rohrbach, Devi Parikh, and Stefan Lee. 2019. 12-in-1: Multi-Task Vision and Language Representation Learning. CoRR abs/1912.02315 (2019).
[30] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/researchcovers/ languageunsupervised/language understanding paper. pdf (2018).
[31] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. [n.d.]. Language models are unsupervised multitask learners. ([n. d.]).
[32] Aditya Ramesh, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. 2021. Zero-Shot Text-to-Image Generation. arXiv:2102.12092 [cs.CV]
[33] Shaoqing Ren, Kaiming He, Ross B. Girshick, and Jian Sun. 2015. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. In NeurIPS 2015. 91–99.
[34] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In ACL 2016.
[35] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. 2017. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. arXiv preprint arXiv:1701.06538 (2017).
[36] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. 2019. Megatron-LM: Training Multi-Billion Parameter Language Models Using Model Parallelism. arXiv preprint arXiv:1909.08053 (2019).
[37] Kaitao Song, Xu Tan, Tao Qin, Jianfeng Lu, and Tie-Yan Liu. 2019. MASS: Masked Sequence to Sequence Pre-training for Language Generation. In ICML 2019. 5926–5936.
[38] Weijie Su, Xizhou Zhu, Yue Cao, Bin Li, Lewei Lu, Furu Wei, and Jifeng Dai. 2020. VL-BERT: Pre-training of Generic Visual-Linguistic Representations. In ICLR 2020.
[39] Maosong Sun, Jingyang Li, Zhipeng Guo, Z Yu, Y Zheng, X Si, and Z Liu. 2016. Thuctc: an efficient chinese text classifier. GitHub Repository (2016).
[40] Hao Tan and Mohit Bansal. 2019. LXMERT: Learning Cross-Modality Encoder Representations from Transformers. In EMNLP-IJCNLP 2019. 5099–5110.
[41] Aäron van den Oord, Oriol Vinyals, and Koray Kavukcuoglu. 2017. Neural Discrete Representation Learning. In NIPS.
[42] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In NeurIPS 2017. 5998–6008.
[43] Ang Wang, Xianyan Jia, Le Jiang, Jie Zhang, Yong Li, and Wei Lin. 2020. Whale: A Unified Distributed Training Framework. arXiv preprint arXiv:2011.09208 (2020).
[44] Wei Wang, Bin Bi, Ming Yan, Chen Wu, Zuyi Bao, Jiangnan Xia, Liwei Peng, and Luo Si. 2019. Structbert: Incorporating language structures into pre-training for deep language understanding. arXiv preprint arXiv:1908.04577 (2019).
[45] Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google’s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144 (2016).
[46] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. 2017. Aggregated residual transformations for deep neural networks. In CVPR 2017. 1492–1500.
[47] Liang Xu, Xuanwei Zhang, and Qianqian Dong. 2020. CLUECorpus2020: A Large-scale Chinese Corpus for Pre-trainingLanguage Model. arXiv preprint arXiv:2003.01355 (2020).
[48] Tao Xu, Pengchuan Zhang, Qiuyuan Huang, Han Zhang, Zhe Gan, Xiaolei Huang, and Xiaodong He. 2018. Attngan: Fine-grained text to image generation with attentional generative adversarial networks. In Proceedings of the IEEE conference on computer vision and pattern recognition. 1316–1324.
[49] Zhilin Yang, Zihang Dai, Yiming Yang, Jaime G. Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In NeurIPS 2019. 5754–5764.
[50] Fei Yu, Jiji Tang, Weichong Yin, Yu Sun, Hao Tian, Hua Wu, and Haifeng Wang. 2020. Ernie-vil: Knowledge enhanced vision-language representations through scene graph. arXiv preprint arXiv:2006.16934 (2020).
[51] Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason J. Corso, and Jianfeng Gao. 2020. Unified Vision-Language Pre-Training for Image Captioning and VQA. In AAAI 2020. 13041–13049.
\
:::info This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.
:::