Dataset Viewer
| Column | Type | Values |
| --- | --- | --- |
| shuffled_text | string | lengths 267–8.32k |
| A | string | 6 classes |
| B | string | 6 classes |
| C | string | 6 classes |
| D | string | 6 classes |
| label | string | 4 classes |
**A**: In this paper, we have adopted a spectral approach to GoM analysis of multivariate binary responses**B**: Under the notion of expectation identifiability, we have proposed sufficient conditions that are close to being necessary for GoM models to be identifiable**C**: For estimation, we have proposed an efficient SVD-based spectral algorithm to estimate the subject-level and population-level parameters in the GoM model. Our spectral method has a huge computational advantage over Bayesian or likelihood-based methods and is scalable to large-scale and high-dimensional data.
CBA
ABC
BCA
ACB
Selection 2
**A**: Therefore the average number of inputs consumed by Algorithm 1 is close to the optimum. Moreover, even if the number of required inputs in a given realization of the algorithm can potentially be much larger than its average value, the probability that this happens is very small thanks to the exponential-bound property.**B**: Indeed, [11, Theorem 6] establishes that any algorithm that outputs a Bernoulli random variable with parameter $\tau$ from inputs with parameter $1/2$ must use at least $2$ inputs on average, except when $\tau$ is a dyadic number**C**: Thus, according to this theorem, the number of inputs used by the algorithm has an exponentially bounded tail and its average value is very small
CAB
ABC
CBA
BAC
Selection 3
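The fragment above reasons about the input complexity of Bernoulli-factory-style algorithms. As a point of reference, here is a minimal Python sketch of one standard construction (not necessarily the excerpt's Algorithm 1) that draws a Bernoulli($\tau$) sample from fair bits, attains the quoted average of 2 inputs, and has the exponentially-bounded-tail property:

```python
import random

def bernoulli_from_fair_bits(tau, rng=random):
    """Sample Bernoulli(tau) from i.i.d. fair bits (Bernoulli(1/2) inputs).

    Compares the binary expansion of a uniform U against that of tau;
    the first disagreeing bit decides whether U < tau.
    """
    n_inputs = 0
    while True:
        n_inputs += 1
        u_bit = rng.getrandbits(1)   # consume one fair-coin input
        tau *= 2                     # shift out the next binary digit of tau
        if tau >= 1:
            t_bit, tau = 1, tau - 1
        else:
            t_bit = 0
        if u_bit != t_bit:
            return int(u_bit < t_bit), n_inputs   # 1 with probability tau

x, n = bernoulli_from_fair_bits(0.3)   # (0 or 1, number of inputs used)
```

Each iteration terminates with probability 1/2, so the number of inputs consumed is geometric with mean 2, matching the lower bound of [11, Theorem 6] for non-dyadic $\tau$.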
**A**: We note that this challenge is faced by all ridge estimation algorithms, due to the fact that ridges are local features which may arise in any low-estimated-density regions as long as the density is positive, even when true ridges do not exist in these regions. A second challenge is possible local (but non-global) modes of our ridgeness function $\eta$, which again might lead to spurious ridge points**B**: This challenge is relatively easy to handle, because the global maximum of $\eta$ is known, which is zero. This known maximum provides a way to distinguish between local and global modes of $\eta$. We address these challenges by introducing the following pre-processing and post-processing steps. **C**: In practice our algorithms can encounter the following two challenges. The first is posed by low density regions, where the estimated density tends to be flat, leading to possible spurious ridge points identified by the algorithms
CBA
ABC
BCA
CAB
Selection 3
**A**: Moreover, a k-nearest neighbor (kNN) regression is applied in order to construct the surrogate function**B**: In this section, we compare different algorithms discussed in Section 4. It is important to remark that all the techniques are always compared with the same number of evaluations (denoted as $E$) of the noisy target pdf**C**: Recall that the baseline PM-MH algorithm is not using a surrogate model (see Algorithm 1).
CBA
ABC
BCA
BAC
Selection 4
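As an illustration of the kNN surrogate idea in the fragment above, here is a minimal sketch; the target, budget, and dimensions are hypothetical stand-ins, not the excerpt's actual PM-MH setup:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)

# Hypothetical stand-in for the noisy target log-pdf.
def noisy_log_target(theta):
    return -0.5 * np.sum(theta**2, axis=-1) + 0.5 * rng.standard_normal(theta.shape[0])

E = 200                                    # fixed budget of noisy evaluations
thetas = rng.uniform(-3, 3, size=(E, 2))   # evaluation sites
values = noisy_log_target(thetas)

# The kNN regression surrogate described in the fragment above.
surrogate = KNeighborsRegressor(n_neighbors=10).fit(thetas, values)

# The surrogate can now replace further expensive noisy evaluations,
# e.g. inside a Metropolis-Hastings acceptance ratio.
print(surrogate.predict(np.zeros((1, 2))))
```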
**A**: its variance and skewness. Appendix A extends our results accordingly. This work appears to provide the first nonlinear, nonparametric estimators for long term dose response curves and counterfactual distributions.**B**: Definition 2.1 generalizes from long term dose response curves to long term counterfactual distributions by replacing the $\mathbb{E}$ symbols with $\mathbb{P}$ symbols**C**: Long term counterfactual distributions capture aspects of the long term reward distribution beyond its mean, e.g
CBA
ACB
BCA
CAB
Selection 4
**A**: Furthermore, quite notably, the performance of SAA can be significantly improved upon for the ski-rental problem, even under the Kolmogorov distance**B**: We discuss in detail the derivation of alternative policies for ski-rental in Appendix E.**C**: We note that related arguments allow us to design alternative policies for the ski-rental with Wasserstein distance for which SAA performs poorly
ABC
BCA
CBA
BAC
Selection 2
**A**: Especially adding a cohort effect seemed to often be beneficial. Interestingly, adding a period effect when modeling the claim amount often did not lead to any improvement. This is possibly due to the fact that inflation is often adjusted for before being aggregated into run-off triangles.**B**: In the data sets considered, including additional effects such as calendar effects and cohort effects often improved the fit**C**: The simplicity of the model furthermore very naturally invites consideration of model extensions beyond the simple age model
BCA
BAC
ABC
CBA
Selection 4
**A**: The source population is different from the target population and source samples may not be representative of the target population**B**: The goal is to transport the causal effects from the source population to the target population. This setup arises widely in public policy research, where a randomized trial or observational study is conducted in selected samples while the target samples are from administrative databases or surveys and can be very different from the samples enrolled in the study (our data example in Section 8 falls into this category, as will be discussed in more detail shortly). **C**: In contrast, in the transportation setting, the source population is (at least partly) external to the target population (Cole and Stuart, 2010)
BAC
CBA
CAB
BCA
Selection 4
**A**: Indeed, we present an elementary lemma showing that the resulting model learns a segmentized output function spanned by the chosen basis, meaning that the spanning coefficients depend on the values of the remaining fields**B**: This is, of course, an essential property for recommendation systems, since indeed users with different demographic or contextual properties may behave differently**C**: For a pair of fields, the segmentized functions are spanned by a tensor product of the chosen pair of bases, where the coefficients are learned in factorized form by the model.
CAB
CBA
CBA
ABC
Selection 4
**A**: Estimation of the observation matrix through importance sampling thus renders GFE optimisation a stochastic procedure. As a result, the GFE may fluctuate over iterations. For policy selection, we therefore average the GFE over iterations, after a short burn-in period (ten iterations in this case).**B**: Therefore we pass the log-message as a function directly and use importance sampling to evaluate expectations of $q(\mathbf{A})$ (Akbayrak et al., 2021)**C**: 3 does not express a (scaled) standard distribution type as a function of $\mathbf{A}$
ACB
CAB
CBA
ACB
Selection 3
**A**: We also observe from Table 1 that the $R^2$, MSE, MAE, and max error of MNN are far better than those of modified USVT. Specifically, the MSE of MNN is more than 28x better compared to modified USVT. That is, MNN works significantly better on MNAR data. **B**: Results. As we can see from Fig. 4, the estimates from modified USVT are extremely biased**C**: The estimates from MNN, however, appear to be minimally biased and in line with ground truth. Moreover, from Fig. 3, we can see that the estimates made by modified USVT are very sensitive to outliers in the data, while the estimates from MNN are not
CAB
BCA
ACB
BAC
Selection 1
**A**: Moreover, we extended (Jasour et al., 2021) to include exponential updates. We present the new methodological material in Sec. 5.**B**: (Jasour et al., 2021) developed a method that obtains the exact time evolution of the moments of random states for a class of dynamical systems that depend on trigonometric updates. We amended their approach and made it compatible with the Polar tool (Moosbrugger et al., 2022)**C**: Specifically, we incorporated the approach of (Jasour et al., 2021) into Prob-solvable loops when updates involve trigonometric functions. This allows us to automatically compute the exact moments of any order and at all iterations
ABC
ACB
CAB
BAC
Selection 3
**A**: The result is evaluated in Section V. Finally, in Section VI, the conclusion and future work are presented. **B**: The objective function is discussed in Section III. The experimental setup is described in Section IV**C**: This paper is structured as follows. Some preliminary information is provided in Section II
CBA
ABC
ACB
BCA
Selection 1
**A**: (2018) wherein ML is used to learn nuisance functions with ex ante unknown functional forms, and the predicted values of these functions used to construct (orthogonalized) scores for the interest parameters from which consistent and asymptotically normal estimators can be obtained. DML is a very general estimation framework but there are limited examples of its application to panel data, notable examples of which include Chang (2020), Klosin and**B**: Athey, 2018) and Generalised Random Forests by (Athey et al., 2019)**C**: However, the key development, as far as this paper is concerned, is Double/Debiased Machine Learning (DML) by Chernozhukov et al
CAB
BAC
BCA
BCA
Selection 1
**A**: Interestingly, ECBM significantly outperforms other methods in terms of overall concept accuracy, especially in CUB (71.3% for ECBM versus 39.6% for the best baseline CEM); this shows that ECBM successfully captures the interaction (and correlation) among the concepts, thereby leveraging one correctly predicted concept to help correct other concepts’ prediction. Such an advantage also helps improve ECBM’s class accuracy upon other methods.**B**: Concept accuracy across various methods is similar, with our ECBM slightly outperforming others**C**: Concept and Class Label Prediction. Table 1 shows different types of accuracy of the evaluated methods
ACB
ABC
BAC
CBA
Selection 4
**A**: This work presents two new sequential design strategies to build efficient Gaussian process surrogate models in Bayesian inverse problems. These strategies are especially important for cases where the posterior distribution in the inverse problem has thin support or is high-dimensional, in which case space-filling designs are not as competitive**B**: The IP-SUR strategy introduced in this work is shown to be tractable and is supported by a theoretical guarantee of almost sure convergence of the weighted integrated mean square prediction error to zero. This method is compared to a simpler CSQ strategy which is adapted from D-optimal designs and to a strategy based on the minimization of the Bayes risk with respect to the variance of the likelihood estimate**C**: While both methods perform better than D-optimal and I-optimal strategies, the IP-SUR method seems to provide better performance than CSQ for higher dimensions while not relying on the choice of a hyperparameter, all the while being grounded on strong theoretical foundations. It is also comparable to the Bayes risk minimization for all test cases and even superior for the bimodal test case. The latter strategy also does not display a convergence guarantee.
BAC
CBA
ABC
ACB
Selection 3
**A**: In networks, additional effects can be inferred by the logic that an ancestor of an ancestor must be an ancestor**B**: In our simulation, we see that this can help to find almost all ancestors without errors even when some connections are individually hard to find at a given sample size. When incorporating unobserved time series, which is a violation of our assumptions, error control at a predefined level does not work anymore. However, the ordering of the test statistics still provides some indication of what could be true ancestors.**C**: We also obtain asymptotic power up to a few pathological cases
BCA
BAC
CBA
CAB
Selection 1
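The ancestor logic quoted above ("an ancestor of an ancestor must be an ancestor") is a transitive-closure step; a minimal sketch with hypothetical series names:

```python
def ancestor_closure(relations):
    """Transitively close a set of (ancestor, descendant) pairs:
    an ancestor of an ancestor must itself be an ancestor."""
    closure = set(relations)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

# Hypothetical detected relations among three time series:
print(sorted(ancestor_closure({("X1", "X2"), ("X2", "X3")})))
# [('X1', 'X2'), ('X1', 'X3'), ('X2', 'X3')]
```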
**A**: A vast majority of unlearning algorithms (Triantafillou et al**B**: This presents a major challenge to deploy machine unlearning algorithms for tackling label noise.**C**: 2024) require knowledge of which samples are mislabeled to partition the data into the retain set and the forget set. It is challenging to distinguish between mislabeled samples and hard to learn samples (Garg, Ravikumar, and Roy 2024)
CBA
ACB
CBA
BCA
Selection 2
**A**: However, their error bound has linear dependence on the ambient dimension $d$ and exponential dependence on the diameter of the low-dimensional manifold. Another line of works (Chen et al., 2023b; Tang and Yang, 2024; Oko et al., 2023) focused mainly on score estimation with properly chosen neural networks that exploit the low-dimensional structure, which is also different from our main focus. **B**: Empirical evidence suggests that the distributions of natural images are concentrated on or near low-dimensional manifolds within the higher-dimensional space in which they formally reside (Simoncelli and Olshausen, 2001; Pope et al., 2021). In view of this, a reasonable conjecture is that the convergence rate of the DDPM sampler actually depends on the intrinsic dimension rather than the ambient dimension**C**: However, the theoretical understanding of diffusion models when the support of the target data distribution has a low-dimensional structure remains vastly under-explored. As some recent attempts, De Bortoli (2022) established the first convergence guarantee under the Wasserstein-1 metric
CAB
ABC
BCA
BCA
Selection 1
**A**: Expanding on the foundation laid by Csmc, Moretti et al**B**: Vcsmc employs Csmc as a means to create an unbiased estimator for the marginal likelihood:**C**: (2021) introduces Variational Combinatorial Sequential Monte Carlo (Vcsmc) as an approach to learn distributions over phylogenetic trees
ACB
CAB
ABC
CAB
Selection 1
**A**: [31] explored zero-inflated and hurdle models to better capture the inherent sparsity in social and biological networks. Furthermore, Dong et al. [15] and Motalebi et al. [32] specifically focused on adapting stochastic block models to account for excess zeroes, underscoring the importance of accurately modelling sparsity for realistic network analysis.**B**: Similarly, Ebrahimi et al**C**: [16] and Motalebi et al
ACB
CBA
CBA
CAB
Selection 4
**A**: This paper is organised as follows. In Section 2, we estimate, to second order, the large-deviation probabilities of the rare event that a sparse Erdős–Rényi random graph has a linear number of vertices in triangles, study the structure of the graph conditionally on this rare event, and provide proofs for our main results**B**: In Section 4, we show how our main results can be used to consistently estimate the exponential random graph parameters. We close in Section 5 with a discussion and a list of open problems.**C**: In Section 3, we use these results, as well as the key insights developed in their proofs, to study exponential random graphs based on the number of vertices in triangles. We show that, for appropriate parameter choices, such models are sparse, i.e., lead to sparse exponential random graphs
BCA
ACB
CBA
BAC
Selection 2
**A**: Finally, emerging work on flow matching models [36, 37, 38, 39, 40, 41, 42] has achieved impressive generative performance on several benchmark image datasets**B**: These are closely related to the probability flow ODE (pfODE) view of DBMs, and, in fact, have been shown to be equivalent to such models for specific choices of “interpolant” functions and conditional distributions. Despite their exceptional generative performance and deterministic nature, existing flow matching approaches do not allow for compression and, therefore, do not allow practitioners to infer a lower dimensional latent space from data. **C**: Such models utilize simple conditional distribution families to learn a vector field capable of transporting points between two pre-specified densities
ABC
CAB
ACB
CAB
Selection 3
**A**: To understand this better, let’s first take a look at the Neyman-Pearson Lemma: **B**: In the context of the entire population, CTI shares a very similar formulation with the Least Ambiguous Set method used for classification, as described in (Sadinle, Lei, and Wasserman 2019)**C**: If we assume that our quantile regression model is sufficiently accurate, CTI has the potential to achieve the optimal size for prediction sets when considering the marginal distribution
BAC
BAC
CBA
CAB
Selection 4
**A**: This gap is particularly relevant in applications such as transductive conformal prediction on traffic networks. For example, existing Graph Neural Network (GNN) methods can predict the label of each road, where the label can be considered as the cost of traversing that road. This problem has been studied in (Huang et al**B**: 2024; Zargarbashi, Antonelli, and Bojchevski 2023; Zhao, Kang, and Cheng 2024). Conformalized GNN can output a prediction set of each edge’s label, but the cost of a route, which is the sum of the labels of the edges on the route, cannot be directly obtained by applying conformal prediction to individual edges. This challenge arises from two main aspects**C**: First, the sum or average of random variables involves the convolution of the density function, and simple interval arithmetic, such as adding up the lower and upper bounds, cannot provide a confidence interval with the desired coverage. Second, the coverage of each confidence interval is in a marginal sense, so the coverage of two labels from two confidence intervals are dependent events. Consequently, it is difficult to use the conformal prediction set for a single label to devise a prediction set for multiple labels. To address this critical gap in the current literature on conformal prediction and expand its applicability to a broader range of uncertainty quantification problems, we introduce the method Conformal Interval Arithmetic (CIA), specifically designed to estimate the average or other symmetric functions of unknown labels over a certain index set, demonstrating the usefulness of our problem setting in many applications where people are interested in obtaining estimates about multiple labels.
CAB
CAB
CAB
ABC
Selection 4
**A**: Embedding these data-driven linkages with quantification of epistemic uncertainty is critical to assess confidence in predictions and guide future data collection**B**: A cornerstone task in materials modeling and discovery consists in building efficient structure–property linkages from experimental or simulation data, typically expensive to obtain**C**: Herein we consider a surrogate modeling task that maps geometric and materials properties of a two-phase composite microstructure (input $\mathbf{x}$) to effective properties of the representative volume element (output $\mathbf{y}$). In the following we provide a brief introduction to the data generation process, summarized in Fig. 8; the reader is referred to [16] for a more detailed presentation.
CAB
ABC
BCA
BAC
Selection 4
**A**: In this paper, we propose leveraging implicit human feedback, specifically response times, to provide additional insights into preference strength. Unlike explicit feedback, response time is unobtrusive and effortless to measure [17], offering valuable information that complements binary choices [16, 2]**B**: This lack of variation in choices makes it difficult to assess how much a user likes or dislikes any specific product, limiting the system’s ability to accurately infer their preferences. Response time can help overcome this limitation. Psychological research shows an inverse relationship between response time and preference strength [17]: users who strongly prefer to skip a product tend to do so quickly, while longer response times can indicate weaker preferences. Thus, even when choices appear similar, response time can uncover subtle differences in preference strength, helping to accelerate preference learning.**C**: For instance, consider an online retailer that repeatedly presents users with a binary query, whether to purchase or skip a recommended product [35]. Since most users skip products most of the time [33], the probability of skipping becomes nearly 1 for most items
CAB
BCA
ACB
ABC
Selection 3
**A**: Moving beyond the investigated use cases with known solutions, we envisage, and encourage, further testing and applications of QRNGs in stochastic modelling, spanning Bayesian inference, stochastic differential equations, optimisation, and Monte Carlo simulations, to leverage the revealed advantages in all affected fields**B**: We expect the differences between PRNGs and QRNGs to be more pronounced for non-linear problems, such as non-linear stochastic differential equations, and especially for workloads that rely upon probability estimates rather than the leading moments of a distribution**C**: We plan to pursue a wider comparison of QRNGs, classical true random number generators (TRNGs) and a variety of PRNGs in the aforementioned fields using the methodology utilised in this paper.
ABC
CBA
BAC
BAC
Selection 1
**A**: Teacher Selection. RLHF typically aggregates preferences from multiple teachers (Hao et al., 2023; Zhong et al., 2024; Chakraborty et al., 2024)**B**: (2023); Freedman et al. (2023) formalized the teacher selection problem in RLHF, highlighting the need to query the most appropriate teacher for effective reward learning.**C**: Daniels-Koch and Freedman (2022); Barnett et al
ABC
CAB
ABC
ACB
Selection 4
**A**: However, this is not true for infinite state spaces. Hence we need some conditions that guarantee geometric ergodicity, for which we refer the reader to [13, Theorem 9]. Under geometric ergodicity, the mixing time is upper-bounded by**B**: The key difference is that in geometric ergodicity, the constant $N$ depends on the initial state $z$**C**: For finite state spaces $Z$, all irreducible and aperiodic Markov chains are both geometrically and uniformly ergodic
BCA
CAB
ACB
ACB
Selection 2
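For reference, one textbook formalization of the geometric ergodicity discussed in the fragment above (an assumed standard form, not necessarily the exact statement of the excerpt's [13, Theorem 9]):

```latex
\|P^{n}(z,\cdot) - \pi\|_{\mathrm{TV}} \;\le\; N(z)\,\rho^{n},
\qquad \rho \in (0,1),
```

where the constant $N(z)$ depends on the initial state $z$, as the fragment notes. Solving $N(z)\rho^{n}\le\epsilon$ for $n$ gives a mixing-time bound of the form $t_{\mathrm{mix}}(\epsilon) \le \big(\log N(z) + \log(1/\epsilon)\big)/\log(1/\rho)$.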
**A**: however, we remark that this choice of the sphere can be granted flexibility for other geometries, although diagonal matrices work well. This steady state term is primarily a regularizer and helps robustness**B**: The steady state altogether produces a more robust representation that leads to lower out-of-distribution error, as we see in Figure 2. We show our proposed geometric flow is also a gradient flow, which is important as we achieve stability far from singularity/triviality. The solution to the above geometric flow is of the form**C**: $\alpha$ is a parameter to control the strength of the steady state across the flow, but the other term is allowed freedom and can be arbitrarily large despite the steady state tendency so that the data is learned appropriately. Indeed, it can be shown that the geometric flow without the steady state regularizer is a gradient flow that reaches a steady energy functional dissipation only at singularity
BCA
CAB
CBA
ACB
Selection 4
**A**: Batch Thompson sampling is centralized, with all agents having access to the same information**B**: However, this may not be realistic in real-world situations, where communication between agents may be constrained due to bandwidth limitations, computational constraints, or privacy concerns**C**: In these cases, agents may only have access to the points sampled by a few other agents, and thus the datasets available to distinct agents may differ. We propose a distributed Thompson sampling algorithm for this constrained communication case, and provide theoretical guarantees for the algorithm.
CAB
CBA
ABC
CAB
Selection 3
**A**: Our proposed architecture is based on pre-trained Transformer models**B**: Transformer-based neural processes (Müller et al.,, 2021; Nguyen and Grover,, 2022; Chang et al.,, 2024) serve as the foundational structure for our approach, but they have not considered experimental design**C**: Decision Transformers (Chen et al.,, 2021; Zheng et al.,, 2022) can be used for sequentially designing experiments. However, we additionally amortize the predictive distribution, making the learning process more challenging.
ACB
BAC
ACB
ABC
Selection 4
**A**: Fairness in GNNs has gained substantial attention, particularly in efforts to identify and mitigate biases associated with specific sensitive features (Zhang et al**B**: 2024c). Various fairness-aware GNN studies aim to preserve the independence of sensitive features through pre-processing and in-processing techniques (Dong et al**C**: 2023; Luo et al. 2024d).
BAC
BAC
ABC
BCA
Selection 3
**A**: One of our initial assumptions was that the data had been observed at a dense grid. In the case where the data is only available at a sparse grid, further extensions would require smoothing by an appropriate basis, e.g., CB-splines (Machalová et al., 2021) specifically developed for Bayes spaces. Naturally, the next step from univariate data would be to consider multivariate densities. During multivariate functional data analysis, each observation contains the recording of several “functional” variables**B**: In this setting, not only the covariance between different time points is considered, but also the relation between individual variables. If the second dimension is also continuously observed, the observations are so-called random surfaces. As these statistical fields show increasingly practical relevance (Berrendero et al. (2011); Górecki et al**C**: (2018); Dai and Genton (2018); Masak and Panaretos (2023) and references therein), expanding RDPCA to these areas seems worthwhile. Calculating the RDMD involved regularizing by a suitable operator that smooths out unwanted noise components while keeping the relevant signal within the (uncontaminated) data unaffected. Another class of meaningful operators that utilize the functional nature of the data are differential operators, often used in Tikhonov regularization. One has to keep in mind that these operations have yet to be defined for Bayes spaces. Next to PCA, the regularized Mahalanobis distance could be used for concepts like linear or quadratic discriminant analysis for the classification of density data. As these methods rely on similarity measures involving several covariance operators, the robust classification of densities based on the RDMD would be suitable.
ABC
CBA
CAB
CAB
Selection 1
**A**: We assume no coalition of institutions large enough to meet or exceed the threshold $T$ colludes to combine their secret key shares and decrypt data without authorization**B**: If fewer than $T$ participants collude, they cannot decrypt the aggregated ciphertexts**C**: Those holding enough shares to surpass the threshold are presumed to adhere to legitimate protocol steps rather than colluding maliciously.
BCA
ABC
BCA
BCA
Selection 2
**A**: We find that FPET is reasonably well calibrated for predicting mCPR, see Table 1. For example, when predicting mCPR 3 years ahead, i.e. when producing forecasts for 2020 using the training set, the median error in mCPR among all women is -0.7%, and the median absolute error is 1.4%**B**: Examples are given for predicting mCPR in Figure 9 for Mali and Nepal. In Mali, FPET would have underpredicted mCPR, while in Nepal, there were smaller-than-expected increases. In such settings, the use of service statistics data may improve predictions past the most recent survey, as discussed in Mooney et al. [2024a]. **C**: The point predictions that would have been constructed in 2018 generally fall within the uncertainty intervals constructed using all data at nominal levels or above, see Table 2. Similarly, we find that FPET is also reasonably well calibrated for predicting unmet need for modern methods and by marital group. We note that larger absolute errors occur in settings where predictions are off due to an unforeseen stall or acceleration, or where recent data suggests an erroneous trend
ABC
BAC
BAC
ACB
Selection 4
**A**: The number of control bits in an s-MTJ device impacts both energy consumption and the precision of setting the energy bias, which in turn affects the available probabilities of obtaining bit samples**B**: Figure 2 illustrates this relationship**C**: This section evaluates the approximation error caused by imprecision in achieving a desired Bernoulli distribution.
ABC
CBA
CBA
BCA
Selection 1
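A small sketch of the kind of approximation error the fragment above evaluates, under the simplifying assumption that $b$ control bits select one of $2^b$ equally spaced bias levels in $[0,1]$ (real s-MTJ bias-to-probability curves are generally nonlinear, so this grid is purely illustrative):

```python
import numpy as np

def worst_case_bias_error(n_control_bits):
    """Worst-case |p - p_hat| when only 2**n_control_bits equally spaced
    bias levels in [0, 1] are realizable (simplifying assumption)."""
    levels = np.linspace(0.0, 1.0, 2**n_control_bits)
    targets = np.linspace(0.0, 1.0, 10_001)
    # Distance from each target probability to its nearest realizable level.
    return np.abs(targets[:, None] - levels[None, :]).min(axis=1).max()

for b in (2, 4, 8):
    print(b, worst_case_bias_error(b))   # approx. 1 / (2 * (2**b - 1))
```

Under this assumption, doubling the number of control bits roughly squares the grid resolution, so the worst-case error of the achievable Bernoulli parameter shrinks exponentially in $b$.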
**A**: [18], [17], [9], [10], [2], [3]). In this regard it is more convenient to use the measure called Normalized Strength (borrowed from complex networks terminology, see [6], [1]), and that we define here by**B**: [19]**C**: A high measure of CB means that the competition is highly interesting since it is very difficult to predict the result of a match (or a race, in our case), while a low measure of CB means that the competition is very predictable, and therefore boring (see
CAB
BCA
BAC
ABC
Selection 1
**A**: Despite the complexity of these interactions, the equations remain mathematically tractable, often enabling precise predictions of disease trends (see, for instance, the discussions of related DSA-based approaches given in [8, 24]). **B**: By transforming the SIR model using dynamical survival analysis within the edge-based configuration network framework, the resulting system of equations captures the intricate dynamics of network-based interactions**C**: The proposed model is broadly applicable to various domains, including social interactions, biological systems (e.g., neural or protein interactions), and technological networks (e.g., the spread of computer viruses or resilience of infrastructure systems)
BCA
CAB
CBA
CAB
Selection 3
**A**: However, due to differences in dataset characteristics and data analysis mission, this usage is not equivalent to ridges in the TFR**B**: In the following, we focus exclusively on ridges within TFRs. **C**: Before proceeding, it is worth noting that the term “ridge” has a long history of usage in statistics [16], image analysis [13], etc.
BCA
ABC
ACB
CAB
Selection 1
**A**: NMC stands for naive nested Monte Carlo estimation, while BO stands for Bayesian optimization**B**: Figure 2: Results on two gridworld environments comparing EIG-based methods with baselines**C**: "Single st. EIG (x / 8.8)" denotes single-state EIG with the x axis scaled by 8.8 - the mean length of trajectories collected by the full-trajectory EIG variants.
CAB
ABC
ABC
BAC
Selection 4
**A**: Moreover, the core tensors derived from our method (TTM-HOSVD) in Figure 1 and Tensor-LDA in Figure 3 reveal clear interactions between clusters along all modes. In particular, these methods show the first cluster of indices along the first mode switches from topic 1 to topic 2 as the documents to which they correspond switch from cluster 1 to cluster 2 along mode 2. In contrast, neither NTD nor hybrid-LDA shows such clear interaction patterns in the core.**B**: The results in the second mode are more mixed. Hybrid LDA, shown in Figure 4, fails to correctly recognize clusters along the second mode**C**: By contrast, Tucker-decomposition-based methods (ours in Figure 1, Non-negative Tucker Decomposition (NTD) in Figure 2 and Tensor-LDA in Figure 3) successfully recover the mode-2 clusters. However, TTM-HOSVD and NTD feature stronger membership assignments compared to Tensor-LDA
ACB
CBA
CBA
CAB
Selection 4
**A**: Together, this definition informally says that the Markov boundary is the minimal set of variables that, once known, allows us to drop all other variables without losing information about $Y$, and removing any variable from this set would lead to a strict loss of information about $Y$. When the covariates are not compositional, the Markov boundary is (under very mild conditions) equivalent to the set of important covariates defined via conditional dependence (Edwards, 2012; Candès et al., 2018), which, as mentioned in the second-to-last paragraph of Section 1.1, is the basis for covariate importance throughout the literature on parametric and nonparametric methods for identifying important covariates in regression. Thus we find it to be a natural and intuitive target for variable selection with compositional covariates if we can show it remains well-defined under compositionality.**B**: Item 1 in 2.1 says that, after accounting for the covariates in the Markov boundary, all the remaining covariates provide no further information about $Y$**C**: Item 2 says that the Markov boundary is the minimal such set, in the sense that no subset of it has the property in item 1
BCA
CAB
BAC
ACB
Selection 2
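In symbols (one standard notation, assumed here rather than taken from the paper), the two items for a Markov boundary $M$ of $Y$ among covariates $X_1,\dots,X_p$ read:

```latex
\text{(Item 1)}\quad Y \perp\!\!\!\perp X_{M^{c}} \mid X_{M},
\qquad
\text{(Item 2)}\quad Y \not\perp\!\!\!\perp X_{(M')^{c}} \mid X_{M'}
\quad \text{for every proper subset } M' \subsetneq M.
```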
**A**: This approach benefits from a quadratic rate of convergence when the true parameters and their estimates lie within the interior of the parameter space, conditional on the values of row vector representations during the estimation of column vector representations (and vice versa).**B**: Instead, our alternating scheme leverages the Fisher scoring algorithm, with or without learning rate adjustment, to achieve convergence**C**: In the case of the normal distribution, the constant variance allows the use of ALS to estimate both the row and column representation vectors. However, the SA-Tweedie model cannot employ ALS because the variance of the Tweedie distribution is not constant
CAB
ABC
CBA
ABC
Selection 3
**A**: The key difference between SA-ZIG and factor analysis is that SA-ZIG uses a large coefficient matrix, whereas factor analysis uses a single vector. Additionally, SA-ZIG assumes a two-stage Bernoulli-Gamma model, while factor analysis assumes a normal distribution.**B**: The SA-ZIG model is inherently similar to factor analysis**C**: In factor analysis, both the loading matrix and the coefficient vector are unknown
ABC
ACB
ABC
CAB
Selection 4
**A**: (3) About gold standard. The gold standard we used is the minimum of test errors from all algorithms in a comparison. This is because it is desirable to have smaller test errors and fewer informative genes used in a classification algorithm. The algorithms with the smaller SRD values are closer to the ideal performance**B**: A third choice of the gold standard could be the average performance from ten data sets. Then the SRD is a nonparametric measure of how far each algorithm is away from the average performance. With this, however, we may not be able to tell whether an algorithm has the least test error because they are compared to the average.**C**: Alternatively, one could use the maximum error and maximum number of genes as the gold standard. In that case, the algorithm whose SRD value differs most from the gold standard will be the best one. In either case (using minimum or maximum as the gold standard), we believe the conclusion will be consistent
CBA
ABC
ACB
CBA
Selection 3
**A**: Stern (1986) provides an expression for the direct utility function (it is a non-closed form function). Later the functional form (11) was used by Gruber and Saez (2002) and Blomquist and Selin (2010) to estimate taxable income functions.**B**: Sufficient conditions for the Slutsky condition to be satisfied are $\theta \geq 0$ and $\gamma \leq 0$. In a footnote they mention that the direct utility function can be derived from the indirect utility function, but that a closed form solution does not exist**C**: Burtless and Hausman (1978) used this functional form to estimate a labor supply function. They also derived the corresponding indirect utility function and gave necessary and sufficient conditions for this function to be consistent with utility maximization
CBA
BAC
BCA
BAC
Selection 1
**A**: However, more observations offer more redundancy, and thus better robustness for the same algorithm.**B**: Recoverability. To evaluate LRMC’s robustness against outliers, we generated 20 problem instances with varying outlier density levels and compared its recoverability to ScaledGD**C**: Table II shows that LRMC has superior recoverability to ScaledGD against high-density outliers, in both the 10% and 100% observation cases
BCA
ABC
ABC
CAB
Selection 4
**A**: These features, determined by the Köppen function, provide universal topological information about the input space, effectively implementing a k-nearest neighbors structure that is inherent to the representation**B**: The most striking aspect of KST is that it leads to a Generalized Additive Model (GAM) with fixed features that are independent of the target function $f$**C**: The outer function $g$ is then responsible for learning the relationship between these features and the target function $f$. This separation of feature engineering and learning is a key advantage of K-GAM networks, enabling efficient training and inference.
ACB
ACB
BAC
ABC
Selection 3
**A**: Consequently, based on Eq. (10), the necessary $\epsilon$ and $\delta$ for a given $\mathscr{L}$ to qualify as a PAC learner can be analyzed using the accessible physical quantity, i.e., the learning probability (as shown later)**B**: Here, the crucial point is that the theoretical statement has been transformed into practically assessable metrics**C**: For an unspecified learner $\mathscr{L}$, the theoretical framework of computational learning remains valid, allowing the PAC learnability to be specified by the size of the training data (using Theorem 1).
ABC
CBA
BAC
BCA
Selection 3
**A**: The paper makes a number of significant contributions to both the fiducial and causal inference literature**B**: First, we propose a fiducial-based acceptance sampling algorithm to quantify uncertainty of bounds for a variety of causal estimands under various assumptions by leveraging a binary IV**C**: Second, we establish a novel Bernstein–von Mises theorem that verifies the frequentist validity of the proposed fiducial confidence intervals. As a consequence of the Bernstein–von Mises theorem, the proposed confidence intervals provide asymptotically correct coverage for the lower and upper bounds. Third, as a by-product, the acceptance rate of the proposed sampling algorithm is a natural estimator of the fiducial probability of the observed data agreeing with the IV assumptions. Therefore, a high acceptance rate indicates high trust in the feasibility of the IV assumption, while an acceptance rate near 0 suggests that the IV assumptions are likely violated.
BAC
CBA
ACB
ABC
Selection 4
**A**: (2024) propose more relaxed assumptions regarding the variance of contexts. However, these approaches result in a regret that grows exponentially with the number of arms $K$ (as shown in Table 1).**B**: For example, Wang et al**C**: (2023a; b); Yang et al
BAC
CAB
BAC
ACB
Selection 2
**A**: One of the core assumptions of the $H_0$ algorithm is that the performances of all drafted players count for the team that drafted them (Rosenof, 2024b). This assumption can be problematic for several reasons, one of which is especially acute for Rotisserie. Managers do not always consistently set their line-ups, especially when they are not performing well enough to compete for a top placement**B**: They could still win fantasy points in them over managers who are actively competing. This perhaps suggests that a manager hoping to perform well across the board should prioritize the percentage statistics, since those will be more difficult to win fantasy points for. One could also make the argument that it makes the counting statistics less attractive to punt, since punting a counting statistic would forfeit the almost-free points to be earned against inattentive managers in that category. **C**: At the end of a Rotisserie season, there may be a number of managers who are so far behind that they have effectively no chance to win. They are less likely to set their line-ups properly, thereby falling even further behind on counting statistics. However, these managers would have no disadvantage in the percentage statistics
CAB
ACB
CAB
CAB
Selection 2
**A**: Beyond the differences in reachability distance calculations, the EILOF algorithm only computes the LOF score for the new data point, avoiding recalculation of LOF scores for existing points**B**: This design strikes a balance between computational efficiency and accuracy in LOF calculations**C**: However, it is important to emphasize that the accuracy of precise LOF score calculations is distinct from the accuracy of detection results. Given that datasets inherently contain noise, minor deviations in LOF score computations do not necessarily degrade detection performance. In fact, this approach may often yield better results by reducing overfitting and potentially improving the accuracy of outlier detection.
BCA
ABC
CAB
BAC
Selection 2
**A**: Figure 6 presents the actual coverage and widths of the confidence intervals under two different correlation structures**B**: We observe that, in general, the coverage for dependent studies is nearly as good as in the independent case. However, when the estimators are equally correlated with $\rho$ around 0.25, the coverage falls slightly below the desired level. Additionally, HCCT demonstrates better robustness when conducted at a 0.01 significance level.**C**: To numerically compute the confidence intervals, we apply Brent’s method [Brent 1971]—the default optimization and root-finding algorithm for scalar functions in the Python package SciPy—to find both the minimizer of the score and the root of (3.2). We then consider the same simulation settings as in Section 2.5 to obtain confidence intervals for $\theta$ using the approach discussed above
CBA
CBA
CBA
BCA
Selection 4
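A minimal sketch of the SciPy calls the fragment above refers to, with a hypothetical quadratic score standing in for the paper's actual score function:

```python
from scipy.optimize import brentq, minimize_scalar

# Hypothetical score in theta; its roots play the role of the
# confidence-interval endpoints from (3.2) in the excerpt.
score = lambda theta: (theta - 1.3)**2 - 0.5

# Minimizer of the score via Brent's method (SciPy's scalar default).
theta_hat = minimize_scalar(score, method="brent").x

# Interval endpoints as roots of score(theta) = 0, bracketed on
# either side of the minimizer (where the score changes sign).
lower = brentq(score, theta_hat - 10.0, theta_hat)
upper = brentq(score, theta_hat, theta_hat + 10.0)
print(theta_hat, (lower, upper))
```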
**A**: Subsequently, Section 3 presents the results, along with a comprehensive discussion revolving around the forecast accuracy of each transformation**B**: Lastly, Section 4 concludes the paper by summarizing key findings and suggesting possible extensions for future research. **C**: Section 2 provides a detailed description of methodology, covering the key steps and analytical framework employed in this study
BCA
CBA
CBA
CAB
Selection 1
**A**: We have also tested several possible scenarios and variants, such as different noise perturbations in the observation model and the use of mini-batches**B**: The dimension of the space is: **C**: We test the proposed scheme in numerous different numerical examples, comparing it with different benchmark schemes
ABC
BAC
ACB
BCA
Selection 4
**A**: The block structure of the model is designed to capture seasonality in precipitation data [19]. Parameter estimation is carried out using a Bayesian approach, as described by [19]. **B**: This work aims to develop a spatio-temporal regression model with a block structure, incorporating fixed and random functional variables as predictors for the response variable, using the Functional Data Analysis (FDA) approach [24]. Each observation is modeled by fixed and random spatio-temporal effects, which are approximated by linear combinations of tensor products of B-spline bases evaluated in time and space [9]**C**: The covariance structures considered account for spatio-temporal correlations among measurements within the same block and repetition. The expansion in cubic B-splines accommodates intra-block spatial and temporal correlations
ABC
BCA
CAB
ABC
Selection 3
**A**: At the high temperature of 1.5, the reviews become notably more expressive, with shifts in tone**B**: Phrases such as ‘exceeded my expectations’ and ‘a great choice for a quick getaway’ make the text feel more enthusiastic compared to the original. This example suggests that a temperature setting that is too high may alter the input text excessively, resulting in a significant disparity between $\mathcal{O}$ and $\mathcal{G}$**C**: Conversely, a temperature setting that is too low may struggle to paraphrase the input with minimal changes, leading to outputs that lack human-like qualities.
CAB
ACB
BAC
ABC
Selection 4
**A**: Tables S.9-S.10 report the accuracy of selecting $\mathcal{K}$ under Bernoulli and Poisson-based DDEs, which correspond to Figure S.3 and Figure S.4**B**: Similarly, Tables S.11-S.12 display the results for selecting $K^{(2)}$, which correspond to Figures S.5 and S.6.**C**: Accuracy values for Normal-based DDEs are omitted, as all methods demonstrated near-perfect accuracy in this setting
ACB
CAB
CAB
ABC
Selection 1
**A**: Based on this finding, we propose Generative Gradual Domain Adaptation with Optimal Transport (GOAT). At a high-level, GOAT contains the following steps:**B**: The above insight is particularly helpful under the situation where intermediate domains are missing or scarce, which is often the case in real-world applications**C**: It inspires a natural method to generate more intermediate domains useful for GDA
ABC
ACB
CAB
ACB
Selection 3
**A**: The global reach-level estimation supplied in a previous study [47] provides climatological discharge data in our study. We note that sensitivity experiments (driving the model with 10 discharge values ranging from the average discharge to the bankfull discharge) demonstrate that both the astronomical tide and storm tide model responses are not sensitive to the upstream discharge forcing, as the locations of our tidal stations (and VS) are far from regions influenced by riverine dynamics. The same assumption also applies to the synthetic TC simulations.**B**: Accurate upstream riverine discharge inputs are challenging in the complex deltaic model because long-term reliable observations for the Bengal Delta river network are lacking**C**: A common approach is to assume an average climatological upstream discharge for astronomical tide and cyclone-induced storm tide periods, providing a constant hourly input to drive the model at the upstream boundary [48, 20]
CAB
BAC
BAC
ACB
Selection 1
**A**: We run our MCMC algorithm for 10,000 iterations, discarding the first 4,000 as a burn-in, and thinning every twelfth sample. A detailed discussion on the convergence of MCMC sampling can be found in Appendix D.1**B**: We run our proposed approach as described in Section 3 targeting both marginal and heterogeneous treatment effects to evaluate the extent to which ambient air pollution affects mortality, and whether this effect varies by characteristics of zip-codes. We run our model for each year separately using the prior year exposures as the pollutants of interest, and therefore will present results for all years between 2000 and 2016**C**: Overall, we find that convergence diagnostics are very good for the average treatment effect across all years studied. Convergence is slightly worse for MTE-VIM values, though this is likely caused by multi-modality of the posterior distribution, rather than an issue of MCMC sampling, and is still within an acceptable range.
BCA
BCA
BAC
CAB
Selection 3
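The post-processing arithmetic described in the fragment above, as a short sketch (the chain is a placeholder array; the actual sampler targets treatment effects):

```python
import numpy as np

# Placeholder MCMC output: 10,000 iterations of a scalar parameter.
chain = np.random.default_rng(1).standard_normal(10_000)

kept = chain[4_000::12]   # discard 4,000 burn-in draws, keep every twelfth
print(kept.size)          # 500 retained draws
```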
**A**: The average MASE and average rank of Online, Online(E), and Offline(1) are better than that of Original**B**: Overall, only Online, Online(E), and Offline(1) show an effectiveness above 60%. **C**: We illustrate the effectiveness of each data augmentation method in Figure 6, which shows the ratio of times each approach outperforms the Original across the 126 problem variants (6 datasets times 3 neural networks times 7 synthetic data generators)
CBA
ACB
BAC
BCA
Selection 2
**A**: This study examined the impact of three classroom types on elementary school students’ academic achievement: small classes (13-17 students per teacher), regular classes (22-25 students per teacher), and regular classes with a full-time aide. **B**: We revisit Tennessee’s Student/Teacher Achievement Ratio (STAR) Project (Word et al.,, 1990) to illustrate the use of IBD and BIBD**C**: The publicly available data can be accessed at https://doi.org/10.7910/DVN/SIWH9F (Achilles et al.,, 2008)
ABC
BCA
CAB
BCA
Selection 3
**A**: In addition, machine learning methods may perform unsatisfactorily with a relatively small or moderate sample size**B**: Weighting does not directly ensure covariate balance across treatment groups and is less intuitively appealing to practitioners than matching. Therefore, it is imperative to develop a sensible approach for learning the optimal policy with an ensured covariate balance property and desirable efficiency and robustness performance in finite samples. Learning policies using matching techniques is a natural choice.**C**: Nevertheless, the procedure of estimating the nuisance functions may introduce instability, particularly when the estimated propensity score is extreme
BAC
BCA
CAB
CAB
Selection 2
**A**: Section 3 is devoted to Bank of America portfolios for investment-grade and high-yield corporate bonds: their rates (yields), spreads, and total returns. We again show that dividing autoregression innovations by VIX improves them, and again fit the model (2). Finally, we motivate and fit several alternative models for total bond returns. In Theorems 3 and 4, we state and prove long-term stability for the combined model (18) of volatility, rates/spreads, and returns.**B**: The rest of the article is split into two main parts. Section 2 is devoted to Moody’s BAA and AAA-rated bond spreads versus 10-year Treasury rates. We show that dividing autoregression innovations by VIX improves them by making them closer to i.i.d**C**: normal. We fit the model (2), and in Theorems 1 and 2 we prove long-term stability for the combined model (1) and (2)
CAB
BCA
ACB
ABC
Selection 1
**A**: Second, we show that there are settings with recoverable structure in which any intervention rule that robustly increases total surplus is equivalent, in terms of how it allocates surplus, to the interventions identified by our main result. These tightness results show that robust interventions must, in general, be tailored to the marketplace—there is no simple rule of thumb that always works. On the other hand, under conditions that we identify, it is possible to tailor rules well, despite large uncertainty in many aspects of demand.**B**: Furthermore, our results provide tight conditions for robust intervention in the following sense**C**: First, the property of recoverable structure cannot be dispensed with; there are reasonable demand systems without recoverable structure for which the authority cannot robustly increase total surplus
BCA
ACB
CAB
BAC
Selection 3
**A**: By focusing on the “mfeat-large” dataset in the main paper, we aim to illustrate the benefits of our proposed algorithms in a complex, multi-view context**B**: To save space, only the concatenated and multi-view subplots are included for the multi-class plot; complete results can be found in the Appendix. **C**: Results on the other datasets are included in the Appendix G
BAC
ABC
ACB
CBA
Selection 3
**A**: In setting 1, we run 100 epochs, leverage the gradient descent method with the Adam optimizer, and set the learning rate to 0.05**B**: In setting 2, we run 100 epochs, leverage the gradient descent method with the Adam optimizer, and set the learning rate to 0.1.**C**: We train $h$ and $f$ as follows
ACB
ACB
BCA
ABC
Selection 3
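A minimal PyTorch sketch of the two training configurations quoted above; since $h$ and $f$ are not specified in the fragment, a placeholder MLP and synthetic data are assumed:

```python
import torch

def train(model, x, y, lr, epochs=100):
    """Gradient descent with the Adam optimizer, as in both settings."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)
        loss.backward()
        opt.step()
    return model

# Placeholder network and data standing in for the unspecified h (or f).
h = torch.nn.Sequential(torch.nn.Linear(10, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
x, y = torch.randn(64, 10), torch.randn(64, 1)

train(h, x, y, lr=0.05)   # setting 1: 100 epochs, learning rate 0.05
train(h, x, y, lr=0.1)    # setting 2: 100 epochs, learning rate 0.1
```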
**A**: We plan to develop statistically valid and more accurate inferential tools, and explore new real-world applications within these ongoing initiatives. **B**: The authors’ team is currently pursuing several related projects, including extensions to advanced nonparametric domains such as functional data analysis, survival analysis, Bayesian nonparametrics, quantile regression, and isotonic regression**C**: These projects aim to address target tasks through innovative data integration frameworks
ABC
ACB
BCA
CAB
Selection 4
**A**: Properties of the Archimedean copulas have been studied extensively under continuous distribution functions, but limited work has been done for discrete or mixed distribution functions**B**: This is the approach taken in this paper.**C**: The study of copula properties for non-continuous distributions can be approached via topological arguments, like Sklar’s Theorem; see [2]
ABC
ACB
CAB
CBA
Selection 2
**A**: The popular maximum mean discrepancies (Oates et al., 2017) have been coupled with kernel methods, for which Sobolev properties cover a fundamental role, and the reader is referred to the most recent contribution in this direction by Barp et al. (2022). The class of kernels proposed in this paper is a very good candidate in all these directions. Additionally, the property of compact support allows for considerable computational gains while preserving the required smoothness properties. Hence, extension of the previously mentioned directions to this class becomes imperative. It is not clear to the authors how the hole effect will play a role (if any) within these research directions, although this aspect deserves attention.**B**: (2013) and Narcowich et al. (2006). As mentioned in Korte-Stapff et al. (2023) (see the references therein), the Sobolev properties turn out to be fundamental within uncertainty quantification in nonparametric methods**C**: Theoretical results related to Gaussian regression in machine learning are strongly connected to the Sobolev properties, and we refer the reader to Korte-Stapff et al. (2023). For example, Sobolev smoothness is of crucial importance in Bayesian contraction rates (Van Der Vaart and Van Zanten, 2011). Similar results where Sobolev rates pop up are contained in Schaback and Wendland (2006), Scheuerer et al
ABC
BCA
ABC
CBA
Selection 4
**A**: Thus, given sufficient computational resources, the score function method can be both fast and ensure accuracy and unbiasedness. **B**: Moreover, the score function method is unbiased and can be implemented in parallel**C**: On the other hand, our score function method can sample from posterior distributions and perform UQ also without relying on MCMC
CBA
CAB
BAC
CAB
Selection 1
**A**: Unlike CRPS (see Figure 1), optimal transport compares CDFs not vertically but horizontally; see the illustration in Figure 2.**B**: The lemma makes it possible to consider the probability distribution’s quantile function (inverse cumulative distribution function) instead of the probability distribution for computation of the 1-dimensional optimal transport cost**C**: The result was initially proved by [29] and then rediscovered several times, see [6, 31, 38]
CBA
ACB
ABC
ACB
Selection 1
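The quantile-function reduction quoted above makes the 1-dimensional cost easy to compute from data; a sketch for equal-size empirical samples, where matching sorted values is exactly the "horizontal" comparison of CDFs:

```python
import numpy as np

def wasserstein_1d(x, y, p=1):
    """1-D optimal transport cost between two equal-size empirical samples.

    By the quantile-function lemma, the optimal coupling matches sorted
    samples, i.e., compares the two quantile functions pointwise.
    """
    xs, ys = np.sort(x), np.sort(y)
    return np.mean(np.abs(xs - ys) ** p) ** (1 / p)

rng = np.random.default_rng(0)
# Shifting a distribution by 0.5 gives a cost of roughly 0.5.
print(wasserstein_1d(rng.normal(0.0, 1.0, 1000), rng.normal(0.5, 1.0, 1000)))
```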
**A**: We are allowed $m$ measurements, so we have blue liquid of volume $\sigma^{-2}m$ units at our disposal**B**: When**C**: $\lambda_i^{-1}$ volume units
CBA
CAB
BCA
BAC
Selection 3
**A**: The suggested approach has been applied to the 2022 FIFA World Cup**B**: Ensuring the attractiveness and competitiveness of the matches played in the last round of the group stage seems to require a more fundamental change in the tournament format such as the following: **C**: Its design could have been improved by changing the random group labelling and group match schedules but neither intervention mitigates the risk of tanking considerably
CAB
BCA
CAB
ACB
Selection 4
**A**: In particular, we work with TTN built from low-rank tensors**B**: Local low-rank tensors then reduce the number of components in each layer of the TTN by a factor of $1/b$. The output of the tensor in the top layer is the decision function. The resulting low-rank TTN classifiers have several advantages:**C**: The image classifier maps each pixel into one component of an exponentially big tensor product space
ABC
BAC
BAC
ACB
Selection 4
**A**: Along the line of work on generative models, studies on generative models and unsupervised learning have made headway into better understanding the identifiability properties of a latent representation which is meant to capture some underlying factors of variation from which the data was generated**B**: Namely, independent component analysis (ICA) is the classical approach for learning a latent representation for which there are identifiability results, where the generating process is assumed to consist of a linear mixing function (Comon, 1994; Bell & Sejnowski, 1995; Hyvärinen et al., 2001)**C**: A major problem in nonlinear ICA is that, without assumptions on either the source distribution or the generating process, the model is seriously unidentifiable (Hyvärinen & Pajunen, 1999). Recent breakthroughs introduced auxiliary variables, e.g., domain indexes, to advance the identifiability results (Hyvarinen et al., 2019; Sorrenson et al., 2020; Hälvä & Hyvarinen, 2020; Lachapelle et al., 2021; Khemakhem et al., 2020a; von Kügelgen et al., 2021; Lu et al., 2020). These works aim to identify and disentangle the components of the latent representation while assuming that all of them are changing across domains.
BCA
ACB
BAC
ABC
Selection 4
**A**: Consequently, we leverage the Hamming distance and Lin1 in the methods based on dimensionality reduction in the performance comparison. **B**: In light of the above experimental results, the Hamming and Lin1 distances display distinguished performance with respect to at least one or two of the three count metrics (as represented by the outliers in Figure 3c): the number of correctly identified ODS, the number of correctly identified CRDS, and the size of the intersection set**C**: Nevertheless, all these methods cannot correctly identify all CRDSs, thus failing to meet the trustworthiness standard required to discern random data
ABC
CAB
BAC
ACB
Selection 2
**A**: denote by $\widehat{G}\in\mathcal{C}(S_{\bar{\mu}}\mathcal{M},\mathbb{R})$**B**: Since $\widehat{G}$ is the uniform limit of continuous functions it is also continuous**C**: Since both $\widehat{G}$ and $G$ are continuous, they are determined completely by their
BAC
ABC
CAB
BCA
Selection 2
**A**: The resulting graph has a maximum node degree of 30, a median degree of 19, and about 18.4% of node pairs connected. **B**: To estimate a suitable graph $G$, we use a modified GLasso that leaves within-sector edges unpenalised**C**: This approach incorporates industry knowledge to produce a super-graph $\widehat{G}$ that may include extra edges but still yields an adequate GGM
ABC
CBA
CAB
BAC
Selection 3
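Scikit-learn's GraphicalLasso only accepts a scalar penalty, so leaving within-sector edges unpenalised needs an elementwise penalty matrix (as in R's glasso, which accepts a matrix rho) or a hand-rolled solver. Below is a compact ADMM sketch for $\min_{\Theta \succ 0} -\log\det\Theta + \mathrm{tr}(S\Theta) + \lVert P \circ \Theta \rVert_1$, using the standard splitting of Boyd et al. (2011); this is an illustration, not the paper's estimator, and the sector layout is made up.

```python
import numpy as np

def soft(x, t):
    """Elementwise soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def glasso_weighted(S, P, rho=1.0, n_iter=200):
    """ADMM for the weighted graphical lasso with elementwise penalty
    matrix P (set P[i, j] = 0 on within-sector pairs to leave those
    edges unpenalised).  A sketch, not the paper's exact procedure."""
    p = S.shape[0]
    Z, U = np.eye(p), np.zeros((p, p))
    for _ in range(n_iter):
        # Theta step: solve rho*Theta - Theta^{-1} = rho*(Z - U) - S
        # via the eigen-decomposition of the right-hand side.
        lam, Q = np.linalg.eigh(rho * (Z - U) - S)
        theta = (lam + np.sqrt(lam ** 2 + 4.0 * rho)) / (2.0 * rho)
        Theta = (Q * theta) @ Q.T
        # Z step: elementwise soft-threshold with weights P / rho.
        Z = soft(Theta + U, P / rho)
        U += Theta - Z
    return Z  # sparse precision estimate

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
S = np.cov(X, rowvar=False)
P = np.full((6, 6), 0.2)
P[:3, :3] = P[3:, 3:] = 0.0   # two hypothetical "sectors", unpenalised
np.fill_diagonal(P, 0.0)
print(np.round(glasso_weighted(S, P), 2))
```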
**A**: Here, $L_k^D$ denotes the operator defined in Remark 3.1**B**: and Caponnetto and De Vito [5])**C**: It turns out that this is the
CBA
ABC
BAC
CBA
Selection 3
**A**: Agresti (1999) analyses confidence intervals for the log odds ratio when independent binomial sampling is applied to each population, and Bandyopadhyay et al. (2017) address the more general setting where the two populations are sampled using different combinations of binomial and inverse binomial sampling.**B**: Cho (2007) estimates the probability ratio under a loss function defined as the sum of squared error and cost proportional to the number of observations, with samples taken in pairs, one from each population; and proposes a sequential procedure which is asymptotically optimal when the cost per sample is small. Cho (2013) presents a sequential estimator for a probability ratio when the proportion of sample sizes is specified, and studies its asymptotic properties**C**: Kokaew et al. (2021, 2023) consider correlation between observations of the two populations, assuming sample sets of fixed size; and propose estimators of the probability ratio or its logarithm, for which they derive asymptotic confidence intervals
ABC
ACB
BCA
CAB
Selection 4
**A**: Results on both synthetic and real-world experiments validate the effectiveness of the proposed framework, and we use detailed analysis to study its underlying behavior. **B**: By constructing a tailored combinatorial graph and sampling subgraphs progressively with a recursive algorithm, we are able to traverse the combinatorial space and optimize the objective function using BO in a sample-efficient manner**C**: In this work, we introduce a novel Bayesian optimization framework to optimize black-box functions defined on node subsets in a generic and potentially unknown graph
BAC
CBA
ACB
ACB
Selection 2
**A**: Now, the tests are applied to the pre-construction data**B**: In this case, the estimated p-values are $0.294$ for the new test and $0.805$ for the test by [5] (again, $B=1000$ bootstrap resamples were used for approximating the two p-values)**C**: Consequently, the two tests confirm the unimodality of the pre-construction data. Since the post-construction observations have been shown to be multimodal, this also indicates that the construction of the two new wind farms caused a change in the migratory raptor routes in order to avoid them.
BCA
BAC
ABC
CBA
Selection 3
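The unimodality tests themselves are not reproduced here, but the row above approximates both p-values with $B=1000$ bootstrap resamples; a generic sketch of that mechanism, with a placeholder statistic and resampler, is:

```python
import numpy as np

def bootstrap_pvalue(data, statistic, resample, B=1000, seed=0):
    """Approximate a p-value by comparing the observed statistic with
    B statistics computed on resamples drawn under the null.
    `resample(data, rng)` must generate one null-consistent resample;
    for the unimodality tests in the row above this would be, e.g., a
    smoothed bootstrap from a unimodal density estimate (not shown)."""
    rng = np.random.default_rng(seed)
    t_obs = statistic(data)
    t_star = np.array([statistic(resample(data, rng)) for _ in range(B)])
    # Add-one correction keeps the estimate away from an exact zero.
    return (1 + np.sum(t_star >= t_obs)) / (B + 1)

# Toy usage with a placeholder statistic (|mean|) and null resampler:
x = np.random.default_rng(1).normal(size=200)
p = bootstrap_pvalue(
    x,
    statistic=lambda d: np.abs(d.mean()),
    resample=lambda d, rng: rng.choice(d - d.mean(), size=d.size, replace=True),
)
print(f"p = {p:.3f}")
```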
**A**: The literature on the estimation of bid-ask spreads from a time series of displayed prices started with Roll’s estimator, which is based on the empirical covariance of successive price increments [63]. The observable price is considered to be the sum of the mid price, that is the average between the bid and the ask, and a microstructure noise corresponding to a discrete variable equal to $-S/2$ or $S/2$, where $S$ is the bid-ask spread**B**: A straightforward correction of the spread estimator makes it possible to take this simple situation into account [7]. But a more general serial dependence introduces a bias for all the existing estimators cited above. **C**: Many alternatives to Roll’s estimator have been proposed since, exploring approaches based on high-low ranges instead of cross moments [28] and refinements related to overnight price movements [1] or infrequent trading [7]. In this last case, when two consecutive observations of the time series of prices are the same because of an absence of intermediate trades, a spurious correlation of the microstructure noise appears
BAC
ACB
BAC
CAB
Selection 2
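Roll's estimator as described above is short enough to state in full: under the model $p_t = m_t \pm S/2$ with i.i.d. signs, successive price increments have covariance $-S^2/4$, so the spread is recovered as $2\sqrt{-\mathrm{Cov}}$. A sketch:

```python
import numpy as np

def roll_spread(prices):
    """Roll's bid-ask spread estimator [63]: under p_t = m_t + q_t*S/2
    with i.i.d. q_t in {-1, +1}, successive increments satisfy
    Cov(dp_t, dp_{t-1}) = -S^2/4, hence S = 2*sqrt(-Cov).  The
    covariance is clipped at 0 because its empirical value can come
    out positive in small samples."""
    dp = np.diff(np.asarray(prices, dtype=float))
    cov = np.cov(dp[1:], dp[:-1])[0, 1]
    return 2.0 * np.sqrt(max(0.0, -cov))

rng = np.random.default_rng(0)
S_true = 0.10
mid = np.cumsum(rng.normal(scale=0.01, size=100_000))       # efficient price
obs = mid + rng.choice([-1.0, 1.0], size=mid.size) * S_true / 2
print(roll_spread(obs))   # close to 0.10
```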
**A**: Instead of using the standard ELBO loss, we propose to regularize it with a distance-preserving loss function, which utilizes the spatial context as auxiliary information to constrain the learned representation to be geometrically similar to the reference dataset.**B**: As an initial step toward tackling this question, we propose a generic representation learning and transfer learning framework enabling the inference of spatial context from purely gene expression data (e.g., collected from scRNA-seq) by drawing reference from datasets where gene expression is paired with spatial information (e.g., collected from spatial transcriptomics)**C**: This procedure requires little or even no reliance on ST technologies depending on the type of reference data that the user intends to use, which can even come from existing publicly available databases. Specifically, our representation learning framework is based on the variational autoencoder (VAE) models [23], consisting of an encoder and a decoder network and trained on the reference datasets
BCA
BAC
CBA
CAB
Selection 4
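One plausible form of the distance-preserving regulariser described above is to match pairwise distances of the latent codes to pairwise spatial distances in the reference data. The sketch below shows only that penalty term (to be added to the negative ELBO); names are illustrative and the paper's exact loss may differ.

```python
import numpy as np

def distance_preserving_loss(z, coords):
    """Penalise mismatch between pairwise distances of latent codes
    z (n, k) and pairwise spatial distances of reference coordinates
    coords (n, 2).  One plausible form of the geometric regulariser
    described above; added to the negative ELBO during training."""
    dz = np.linalg.norm(z[:, None, :] - z[None, :, :], axis=-1)
    ds = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    # Normalise both distance matrices so their scales are comparable.
    dz, ds = dz / (dz.mean() + 1e-8), ds / (ds.mean() + 1e-8)
    iu = np.triu_indices(len(z), k=1)
    return np.mean((dz[iu] - ds[iu]) ** 2)

rng = np.random.default_rng(0)
coords = rng.uniform(size=(64, 2))                    # spatial reference
z_good = np.hstack([coords, 0.01 * rng.normal(size=(64, 3))])
z_bad = rng.normal(size=(64, 5))
print(distance_preserving_loss(z_good, coords))       # small
print(distance_preserving_loss(z_bad, coords))        # larger
```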
**A**: Further samples are provided in Section B.6**B**: Figure 11: (a) Uncurated samples from the FAE generative model for the pressure field $p$**C**: (b) The distributions of quantities of interest computed using the FAE generative model closely agree with the ground truth.
BAC
ABC
ACB
ACB
Selection 1
**A**: Another future direction is to support POEM with the ability to handle label shift at test time. This challenge is exemplified by scenarios where the source domain has a balanced label distribution, but the test domain becomes unbalanced**B**: The first builds on ideas from [101], particularly their prediction-balanced reservoir sampling technique. This method can be used to approximately simulate an i.i.d. data stream from a non-i.i.d. stream in a class-balanced manner, potentially reducing our martingale process’s sensitivity to label shifts.**C**: In such cases, our current monitoring tool might detect this label shift and trigger unnecessary adaptation in the absence of covariate shift. This underscores the need for a monitoring tool that remains invariant to label shifts. To address this challenge, one may consider two potential approaches
ABC
BAC
CAB
ACB
Selection 4
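A minimal sketch of the class-balanced reservoir idea credited to [101] above, simplified to balance on true labels (the cited method balances on *predicted* labels); all names are illustrative.

```python
import random
from collections import defaultdict

class BalancedReservoir:
    """Keep an (approximately) class-balanced buffer from a non-i.i.d.
    stream: per-class reservoirs of equal capacity, each updated with
    standard reservoir sampling (Algorithm R).  A simplification of
    the prediction-balanced scheme of [101] mentioned above."""

    def __init__(self, capacity, n_classes, seed=0):
        self.per_class = capacity // n_classes
        self.buffers = defaultdict(list)
        self.seen = defaultdict(int)
        self.rng = random.Random(seed)

    def add(self, x, y):
        self.seen[y] += 1
        buf = self.buffers[y]
        if len(buf) < self.per_class:
            buf.append(x)
        else:
            j = self.rng.randrange(self.seen[y])
            if j < self.per_class:
                buf[j] = x          # replace uniformly at random

    def sample(self):
        return [x for buf in self.buffers.values() for x in buf]

res = BalancedReservoir(capacity=20, n_classes=2)
for t in range(1000):
    y = 0 if t < 900 else 1         # heavily label-shifted stream
    res.add(t, y)
print(len(res.buffers[0]), len(res.buffers[1]))   # 10 and 10
```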
**A**: In Chapter 4, we present the results of a simulation study that compares the performance of several location estimators. All proofs, figures, and tables are included in the appendix, and the corresponding code is available on GitHub.**B**: In Chapter 2, we review the concept of functional data depth**C**: In Chapter 3, we define the partially observable functional $\alpha$-trimmed mean based on the partially observable functional depth and establish the strong consistency of the trimmed mean
BCA
CAB
BAC
ACB
Selection 2
**A**: The modeling accuracy for crime was enhanced compared to the conventional spatial model (S), which ignores temporal variation**B**: Based on the empirical result, the determinants of larceny risk vary spatially but exhibit less temporal variation. This suggests that effective countermeasures need to be tailored to specific neighborhoods. **C**: In summary, the proposed method effectively identified space-time patterns at multiple time scales within each coefficient in a computationally efficient manner
CBA
BAC
BCA
BAC
Selection 3
**A**: Based on the data generation process, studies on the learning problem in RMDP can be primarily divided into three categories. RMDP with a generative model has been studied in Zhou et al. (2021); Yang et al. (2022); Panaganti and Kalathil (2022); Shi et al. (2024),**B**: Robust RL**C**: Robust MDP (RMDP) was first introduced by Iyengar (2005); Nilim and El Ghaoui (2005). Most of RMDP research in the literature focuses on the planning problem (Xu and Mannor, 2010; Wiesemann et al., 2013; Tamar et al., 2014; Yu and Xu, 2015; Mannor et al., 2016; Petrik and Russel, 2019; Wang et al., 2023a; Wang et al., 2023b), providing computationally efficient algorithms
BAC
ACB
BAC
CAB
Selection 4
**A**: ODE solvers with adaptive step sizes aim to minimize integration error, which is crucial for accuracy**B**: However, these solvers can slow down training when the ODE becomes stiff, as step sizes shrink and make integration time-consuming [3, 7]. This is especially problematic in GFE-based training.**C**: Using a gradient flow for encoding can be computationally demanding
BCA
CAB
ABC
ACB
Selection 1
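The slowdown described above is easy to reproduce with scipy: on a stiff linear system, an explicit adaptive method (RK45) is forced into tiny steps by the fast mode long after it has decayed, while an implicit method (Radau) is not. The test problem below is a standard illustration, not the ODE from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# A classically stiff linear ODE: one fast (-1000) and one slow (-1) mode.
def f(t, y):
    return np.array([-1000.0 * y[0], -1.0 * y[1]])

y0 = np.array([1.0, 1.0])
for method in ("RK45", "Radau"):
    sol = solve_ivp(f, (0.0, 10.0), y0, method=method, rtol=1e-6, atol=1e-9)
    # nfev counts right-hand-side evaluations; the explicit solver's
    # stability limit keeps its steps tiny, which is the slowdown
    # described in the row above.
    print(f"{method}: {sol.nfev} function evaluations")
```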
**A**: Simulates the bootstrap distribution of the given estimator statistic, which must have been defined as a function with two arguments: statistic(data, indices)**B**: For multidimensional data, each row in data is assumed to be one data point**C**: Additional arguments are passed to statistic.
ABC
BAC
ACB
ACB
Selection 1
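The row above documents an R-style bootstrap interface. A Python rendering of the same contract (statistic receives the full data plus a resampled index vector; extra arguments are forwarded) might look as follows; function and argument names are illustrative.

```python
import numpy as np

def bootstrap(data, statistic, R=1000, seed=0, **kwargs):
    """Simulate the bootstrap distribution of `statistic`, following
    the contract in the row above: statistic(data, indices, **kwargs)
    is called with the full data and a resampled index vector, rows
    being data points.  Illustrative rendering of the documented
    (R-style) interface, not that library's implementation."""
    data = np.asarray(data)
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    return np.array([
        statistic(data, rng.integers(0, n, size=n), **kwargs)
        for _ in range(R)
    ])

def trimmed_mean(data, indices, trim=0.1):
    x = np.sort(data[indices])
    k = int(trim * len(x))
    return x[k:len(x) - k].mean()

x = np.random.default_rng(1).standard_normal(200)
reps = bootstrap(x, trimmed_mean, R=2000, trim=0.1)
print(f"bootstrap SE of the 10%-trimmed mean: {reps.std(ddof=1):.4f}")
```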
**A**: Second, it is hard to incorporate useful features, such as seasonality, into a BTYD model, as the model itself is a pure vintage-based model. Third, for new users with fewer transactions, the model does not have enough data to discriminate between potentially high-value users and low-value ones. Finally, if there are several lines of business, e.g. Uber Ride and Uber Drive, each has to be modeled independently, with a loss of available information. Therefore a more expressive model form is often necessary to leverage a wider range of signals available to the forecaster. **B**: Traditional approaches to predicting non-contractual user-level customer values rely on recency, frequency and monetary value (RFM) (doi:10.1509/jmkr.2005.42.4.415) from the user’s past history to extrapolate future purchasing behaviors. A prominent model family in this class is a set of parametric generative models aptly named Buy Till You Die (BTYD) (doi:10.1509/jmkr.2005.42.4.415; doi:10.1287/mksc.1080.0482). The BTYD approach breaks the customer value forecast into 3 separate modeling objects (the transaction frequency, the transaction amount, and the duration of staying active), models each independently by different distributions with user-level heterogeneous parameters following common prior distributions across all the users, and finally combines them to calculate the forecast value**C**: While the approach is elegant and parsimonious, there are a number of drawbacks that lead to worse performance and limited scope in practical situations. First, the assumption that the transaction amount, the transaction frequency, and the active duration are independent from each other rarely holds in reality. Frequent customers typically are more satisfied customers or have better intent in using the service, and they stay with the service longer
BAC
ABC
ABC
CAB
Selection 4
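The decomposition the paragraph above criticises can be stated in a few lines: fit frequency, amount, and active duration independently, then multiply the three expectations. The numbers below are illustrative draws, not a fitted model, and the multiplication step is only valid under the independence assumption the text argues rarely holds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Per-user posterior-mean components, each fit *independently* in a
# BTYD-style model (illustrative numbers only):
expected_freq_per_month = rng.gamma(shape=2.0, scale=1.0, size=5)
expected_amount = rng.lognormal(mean=3.0, sigma=0.5, size=5)
expected_active_months = rng.gamma(shape=3.0, scale=4.0, size=5)

# The forecast simply multiplies the three pieces, which is exactly
# the independence assumption criticised in the paragraph above.
clv = expected_freq_per_month * expected_amount * expected_active_months
print(np.round(clv, 2))
```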
**A**: Next, we train GCNs and residual GCNs with $0, 20, 40, \dots, 300$ message-passing layers, and compare their performance on the training, validation, and test sets**B**: The training loss and training classification accuracy are shown in Figure 3, while the classification accuracies on the validation and test sets are displayed in Figure 4**C**: In both figures, solid lines represent the average values, and shaded regions indicate the standard deviation.
ABC
ACB
BCA
CAB
Selection 1
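A numpy sketch of the architectural difference behind the depth experiment above: a plain GCN layer versus one with a residual connection. With 300 random layers, the plain stack's activations vanish while the residual stack's stay alive; this illustrates the general mechanism only, not the authors' exact models or training setup.

```python
import numpy as np

def normalized_adjacency(A):
    """Symmetrically normalised adjacency with self-loops,
    A_hat = D^{-1/2} (A + I) D^{-1/2}, as in Kipf & Welling (2017)."""
    A = A + np.eye(A.shape[0])
    d = A.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A @ D_inv_sqrt

def gcn_layer(A_hat, H, W, residual=False):
    """One message-passing layer, ReLU(A_hat H W); with residual=True
    the input is added back, which is what lets very deep stacks like
    the 300-layer ones in the row above keep a usable signal."""
    out = np.maximum(A_hat @ H @ W, 0.0)
    return H + out if residual else out

rng = np.random.default_rng(0)
A = (rng.uniform(size=(50, 50)) < 0.1).astype(float)
A = np.triu(A, 1); A = A + A.T              # symmetric, no self-loops
A_hat = normalized_adjacency(A)
for residual in (False, True):
    H = rng.normal(size=(50, 16))
    for _ in range(300):
        H = gcn_layer(A_hat, H, 0.01 * rng.normal(size=(16, 16)), residual)
    # Plain stack: activations collapse toward 0; residual: stay alive.
    print(f"residual={residual}: mean |activation| = {np.abs(H).mean():.3g}")
```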
**A**: In Section 3, we present the proposed PPLS-BO algorithm for adaptive sampling in reduced dimension. We detail the formulations used to compute the posterior probability density of the GP in reduced dimension, and provide the pseudocode of the PPLS-BO algorithm**B**: Three examples are given in Section 4, demonstrating the improved convergence of PPLS-BO when compared to PLS-BO and classical BO. We demonstrate the algorithms’ versatility through design optimisation of a complex manufacturing example. Finally, Section 5 concludes the paper and discusses promising directions for further research.**C**: This paper is structured as follows. In Section 2, we review PPLS for dimensionality reduction and BO, consisting of GPs and acquisition functions for adaptive sampling
ABC
ACB
BCA
ABC
Selection 3
**A**: In this context, we prove some new results (Proposition 2.1 and Theorem 2.4).**B**: We also elaborate more on the concepts of $\sigma$-fields and discernment**C**: In this section, we reproduce the definitions of learning and knowledge acquisition in [24]
CBA
CAB
BAC
CAB
Selection 1