
Android Canvas Off Screen Drawing Performance



For readers interested in Android Canvas Off Screen Drawing Performance, this article should be a good choice. It also provides useful information on (Reposted) Ensemble Methods for Deep Learning Neural Networks to Reduce Variance and Improve Performance, Android canvas drawBitmap method explained with examples, android canvas drawLine, and Android Canvas.drawText not rendering.

Contents of this article:

  • Android Canvas Off Screen Drawing Performance
  • (Reposted) Ensemble Methods for Deep Learning Neural Networks to Reduce Variance and Improve Performance
  • Android canvas drawBitmap method explained with examples
  • android canvas drawLine
  • Android Canvas.drawText not rendering

Android Canvas Off Screen Drawing Performance

I am developing an Android game using the Canvas element. I draw many graphical elements (sprites) onto a large game map. These elements are drawn with standard drawing functions such as drawLine, drawPath, drawArc and so on.

It is not hard to test whether they are on screen, so if they are off screen I can skip their drawing routines entirely. But even that check has a CPU cost. I wonder whether the Android graphics library can do this faster than I can.

In short, should I just try to draw everything, even elements completely outside the screen coordinates, trusting the Android graphics library to handle them without spending much CPU on them, or should I check each element's drawing-area rectangle myself and skip the drawing routine when it is completely off screen? Which approach is appropriate, and which should be faster?

P.S.: I am targeting Android v2.1 and above.

Solution

From a not entirely scientific test in which I drew bitmaps over an area larger than the screen, I found that checking beforehand whether the Bitmap was on screen did not seem to make a significant difference.

In one test I set one Rect to the screen size and another Rect to the Bitmap's position, and checked Rect.intersects() before drawing. In the other test I just drew the Bitmap. After 300-ish draws there was no discernible trend: some runs went one way, some the other. I ran 300 tests per frame, and the variation from frame to frame was far larger than any difference between the checked and unchecked drawing.
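For reference, here is a minimal Java sketch of the kind of check being benchmarked above. The drawIfVisible helper, its parameters and screenRect are my own names, not taken from the original test code:

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Rect;

// Sketch only: skip the draw call when a sprite is entirely off screen.
public final class OffscreenCheck {

    // Returns true if the sprite was actually drawn.
    static boolean drawIfVisible(Canvas canvas, Bitmap sprite,
                                 int spriteX, int spriteY, Rect screenRect) {
        Rect spriteRect = new Rect(spriteX, spriteY,
                spriteX + sprite.getWidth(), spriteY + sprite.getHeight());
        if (!Rect.intersects(screenRect, spriteRect)) {
            return false; // fully off screen, skip drawing
        }
        canvas.drawBitmap(sprite, spriteX, spriteY, null);
        return true;
    }
}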

From that I think it is safe to assume that Android checks the bounds in its native code; otherwise you would expect a considerable difference. I would share my test code, but I think it makes more sense for you to run your own test for your own situation. It is possible that points behave differently from Bitmaps, or that some other feature of your paint or canvas changes things.

Hope that helps you (or anyone else who stumbles across this question).

(Reposted) Ensemble Methods for Deep Learning Neural Networks to Reduce Variance and Improve Performance


Ensemble Methods for Deep Learning Neural Networks to Reduce Variance and Improve Performance

2018-12-19 13:02:45

 

This blog is copied from: https://machinelearningmastery.com/ensemble-methods-for-deep-learning-neural-networks/ 

 

Deep learning neural networks are nonlinear methods.

They offer increased flexibility and can scale in proportion to the amount of training data available. A downside of this flexibility is that they learn via a stochastic training algorithm which means that they are sensitive to the specifics of the training data and may find a different set of weights each time they are trained, which in turn produce different predictions.

Generally, this is referred to as neural networks having a high variance and it can be frustrating when trying to develop a final model to use for making predictions.

A successful approach to reducing the variance of neural network models is to train multiple models instead of a single model and to combine the predictions from these models. This is called ensemble learning and not only reduces the variance of predictions but also can result in predictions that are better than any single model.

In this post, you will discover methods for deep learning neural networks to reduce variance and improve prediction performance.

After reading this post, you will know:

  • Neural network models are nonlinear and have a high variance, which can be frustrating when preparing a final model for making predictions.
  • Ensemble learning combines the predictions from multiple neural network models to reduce the variance of predictions and reduce generalization error.
  • Techniques for ensemble learning can be grouped by the element that is varied, such as training data, the model, and how predictions are combined.

Let’s get started.

[Image] Ensemble Methods to Reduce Variance and Improve Performance of Deep Learning Neural Networks.
Photo by University of San Francisco's Performing Arts, some rights reserved.

Overview

This tutorial is divided into four parts; they are:

  1. High Variance of Neural Network Models
  2. Reduce Variance Using an Ensemble of Models
  3. How to Ensemble Neural Network Models
  4. Summary of Ensemble Techniques

High Variance of Neural Network Models

Training deep neural networks can be very computationally expensive.

Very deep networks trained on millions of examples may take days, weeks, and sometimes months to train.

Google’s baseline model […] was a deep convolutional neural network […] that had been trained for about six months using asynchronous stochastic gradient descent on a large number of cores.

— Distilling the Knowledge in a Neural Network, 2015.

After the investment of so much time and resources, there is no guarantee that the final model will have low generalization error, performing well on examples not seen during training.

… train many different candidate networks and then to select the best, […] and to discard the rest. There are two disadvantages with such an approach. First, all of the effort involved in training the remaining networks is wasted. Second, […] the network which had best performance on the validation set might not be the one with the best performance on new test data.

— Pages 364-365, Neural Networks for Pattern Recognition, 1995.

Neural network models are a nonlinear method. This means that they can learn complex nonlinear relationships in the data. A downside of this flexibility is that they are sensitive to initial conditions, both in terms of the initial random weights and in terms of the statistical noise in the training dataset.

This stochastic nature of the learning algorithm means that each time a neural network model is trained, it may learn a slightly (or dramatically) different version of the mapping function from inputs to outputs, that in turn will have different performance on the training and holdout datasets.

As such, we can think of a neural network as a method that has a low bias and high variance. Even when trained on large datasets to satisfy the high variance, having any variance in a final model that is intended to be used to make predictions can be frustrating.

 


 

Reduce Variance Using an Ensemble of Models

A solution to the high variance of neural networks is to train multiple models and combine their predictions.

The idea is to combine the predictions from multiple good but different models.

A good model has skill, meaning that its predictions are better than random chance. Importantly, the models must be good in different ways; they must make different prediction errors.

The reason that model averaging works is that different models will usually not make all the same errors on the test set.

— Page 256, Deep Learning, 2016.

Combining the predictions from multiple neural networks adds a bias that in turn counters the variance of a single trained neural network model. The results are predictions that are less sensitive to the specifics of the training data, choice of training scheme, and the serendipity of a single training run.

In addition to reducing the variance in the prediction, the ensemble can also result in better predictions than any single best model.

… the performance of a committee can be better than the performance of the best single network used in isolation.

— Page 365, Neural Networks for Pattern Recognition, 1995.

This approach belongs to a general class of methods called “ensemble learning” that describes methods that attempt to make the best use of the predictions from multiple models prepared for the same problem.

Generally, ensemble learning involves training more than one network on the same dataset, then using each of the trained models to make a prediction before combining the predictions in some way to make a final outcome or prediction.

In fact, ensembling of models is a standard approach in applied machine learning to ensure that the most stable and best possible prediction is made.

For example, Alex Krizhevsky, et al. in their famous 2012 paper titled “Imagenet classification with deep convolutional neural networks” that introduced very deep convolutional neural networks for photo classification (i.e. AlexNet) used model averaging across multiple well-performing CNN models to achieve state-of-the-art results at the time. Performance of one model was compared to ensemble predictions averaged over two, five, and seven different models.

Averaging the predictions of five similar CNNs gives an error rate of 16.4%. […] Averaging the predictions of two CNNs that were pre-trained […] with the aforementioned five CNNs gives an error rate of 15.3%.

Ensembling is also the approach used by winners in machine learning competitions.

Another powerful technique for obtaining the best possible results on a task is model ensembling. […] If you look at machine-learning competitions, in particular on Kaggle, you’ll see that the winners use very large ensembles of models that inevitably beat any single model, no matter how good.

— Page 264, Deep Learning With Python, 2017.

How to Ensemble Neural Network Models

Perhaps the oldest and still most commonly used ensembling approach for neural networks is called a “committee of networks.”

A collection of networks with the same configuration and different initial random weights is trained on the same dataset. Each model is then used to make a prediction and the actual prediction is calculated as the average of the predictions.

The number of models in the ensemble is often kept small both because of the computational expense in training models and because of the diminishing returns in performance from adding more ensemble members. Ensembles may be as small as three, five, or 10 trained models.
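As a concrete illustration, here is a minimal sketch of averaging committee predictions. The Model interface is a hypothetical stand-in for however your trained networks are invoked, and each predict() call is assumed to return a vector of class probabilities:

import java.util.List;

final class CommitteeEnsemble {

    // Hypothetical interface over a trained network.
    interface Model {
        double[] predict(double[] input);
    }

    // Element-wise average of the members' probability vectors.
    // Assumes at least one member and equal-length outputs.
    static double[] averagePrediction(List<Model> members, double[] input) {
        double[] sum = null;
        for (Model m : members) {
            double[] p = m.predict(input);
            if (sum == null) {
                sum = new double[p.length];
            }
            for (int i = 0; i < p.length; i++) {
                sum[i] += p[i];
            }
        }
        for (int i = 0; i < sum.length; i++) {
            sum[i] /= members.size();
        }
        return sum;
    }
}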

The field of ensemble learning is well studied and there are many variations on this simple theme.

It can be helpful to think of varying each of the three major elements of the ensemble method; for example:

  • Training Data: Vary the choice of data used to train each model in the ensemble.
  • Ensemble Models: Vary the choice of the models used in the ensemble.
  • Combinations: Vary the choice of the way that outcomes from ensemble members are combined.

Let’s take a closer look at each element in turn.

Varying Training Data

The data used to train each member of the ensemble can be varied.

The simplest approach would be to use k-fold cross-validation to estimate the generalization error of the chosen model configuration. In this procedure, k different models are trained on k different subsets of the training data. These k models can then be saved and used as members of an ensemble.
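As an illustration of how the k training subsets can be produced, here is a minimal sketch of splitting sample indices into k folds; training the k networks on the resulting train/validation splits is left out, and the class and method names are my own:

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

final class KFoldSplitter {

    // Shuffle the sample indices and deal them into k folds of near-equal size.
    // Model f would be trained on all folds except fold f.
    static List<List<Integer>> makeFolds(int numSamples, int k, long seed) {
        List<Integer> indices = new ArrayList<>();
        for (int i = 0; i < numSamples; i++) {
            indices.add(i);
        }
        Collections.shuffle(indices, new Random(seed));
        List<List<Integer>> folds = new ArrayList<>();
        for (int f = 0; f < k; f++) {
            folds.add(new ArrayList<>());
        }
        for (int i = 0; i < numSamples; i++) {
            folds.get(i % k).add(indices.get(i));
        }
        return folds;
    }
}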

Another popular approach involves resampling the training dataset with replacement, then training a network using the resampled dataset. The resampling procedure means that the composition of each training dataset is different with the possibility of duplicated examples allowing the model trained on the dataset to have a slightly different expectation of the density of the samples, and in turn different generalization error.

This approach is called bootstrap aggregation, or bagging for short, and was designed for use with unpruned decision trees that have high variance and low bias. Typically a large number of decision trees are used, such as hundreds or thousands, given that they are fast to prepare.
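A minimal sketch of the bootstrap resampling step itself (drawing N indices with replacement from an N-example training set); each resample would be used to train one ensemble member, and the training step is not shown:

import java.util.Random;

final class Bootstrap {

    // Sampling with replacement: some indices repeat, some are left out.
    static int[] resampleIndices(int numSamples, Random rng) {
        int[] indices = new int[numSamples];
        for (int i = 0; i < numSamples; i++) {
            indices[i] = rng.nextInt(numSamples);
        }
        return indices;
    }
}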

… a natural way to reduce the variance and hence increase the prediction accuracy of a statistical learning method is to take many training sets from the population, build a separate prediction model using each training set, and average the resulting predictions. […] Of course, this is not practical because we generally do not have access to multiple training sets. Instead, we can bootstrap, by taking repeated samples from the (single) training data set.

— Pages 316-317, An Introduction to Statistical Learning with Applications in R, 2013.

An equivalent approach might be to use a smaller subset of the training dataset without regularization to allow faster training and some overfitting.

The desire for slightly under-optimized models applies to the selection of ensemble members more generally.

… the members of the committee should not individually be chosen to have optimal trade-off between bias and variance, but should have relatively smaller bias, since the extra variance can be removed by averaging.

— Page 366, Neural Networks for Pattern Recognition, 1995.

Other approaches may involve selecting a random subspace of the input space to allocate to each model, such as a subset of the hyper-volume in the input space or a subset of input features.

Varying Models

Training the same under-constrained model on the same data with different initial conditions will result in different models given the difficulty of the problem, and the stochastic nature of the learning algorithm.

This is because the optimization problem that the network is trying to solve is so challenging that there are many “good” and “different” solutions to map inputs to outputs.

Most neural network algorithms achieve sub-optimal performance specifically due to the existence of an overwhelming number of sub-optimal local minima. If we take a set of neural networks which have converged to local minima and apply averaging we can construct an improved estimate. One way to understand this fact is to consider that, in general, networks which have fallen into different local minima will perform poorly in different regions of feature space and thus their error terms will not be strongly correlated.

— When networks disagree: Ensemble methods for hybrid neural networks, 1995.

This may result in a reduced variance, but may not dramatically improve generalization error. The errors made by the models may still be too highly correlated because the models all have learned similar mapping functions.

An alternative approach might be to vary the configuration of each ensemble model, such as using networks with different capacity (e.g. number of layers or nodes) or models trained under different conditions (e.g. learning rate or regularization).

The result may be an ensemble of models that have learned a more heterogeneous collection of mapping functions and in turn have a lower correlation in their predictions and prediction errors.

Differences in random initialization, random selection of minibatches, differences in hyperparameters, or different outcomes of non-deterministic implementations of neural networks are often enough to cause different members of the ensemble to make partially independent errors.

— Pages 257-258, Deep Learning, 2016.

Such an ensemble of differently configured models can be achieved through the normal process of developing the network and tuning its hyperparameters. Each model could be saved during this process and a subset of better models chosen to comprise the ensemble.
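A minimal sketch of that idea: record every candidate model produced during tuning together with its validation score, then keep the top scorers as ensemble members. The class and method names are my own, and the model type is left generic:

import java.util.ArrayList;
import java.util.List;

final class TuningEnsembleSelector<M> {

    private final class Scored {
        final M model;
        final double score;
        Scored(M model, double score) {
            this.model = model;
            this.score = score;
        }
    }

    private final List<Scored> candidates = new ArrayList<>();

    // Call once per tuning run with the trained model and its validation score.
    void record(M model, double validationScore) {
        candidates.add(new Scored(model, validationScore));
    }

    // Return the n best-scoring models to use as ensemble members.
    List<M> selectTop(int n) {
        List<Scored> sorted = new ArrayList<>(candidates);
        sorted.sort((a, b) -> Double.compare(b.score, a.score)); // highest first
        List<M> best = new ArrayList<>();
        for (int i = 0; i < Math.min(n, sorted.size()); i++) {
            best.add(sorted.get(i).model);
        }
        return best;
    }
}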

Slightly inferiorly trained networks are a free by-product of most tuning algorithms; it is desirable to use such extra copies even when their performance is significantly worse than the best performance found. Better performance yet can be achieved through careful planning for an ensemble classification by using the best available parameters and training different copies on different subsets of the available database.

— Neural Network Ensembles, 1990.

In cases where a single model may take weeks or months to train, another alternative may be to periodically save the best model during the training process, called snapshot or checkpoint models, then select ensemble members among the saved models. This provides the benefits of having multiple models trained on the same data, although collected during a single training run.
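A minimal sketch of that kind of checkpoint collection during a single training run. The Copier callback is a hypothetical stand-in for serializing the current weights, and the selection policy shown here (snapshot on every improvement of the validation score) is just one option:

import java.util.ArrayList;
import java.util.List;

final class CheckpointCollector<M> {

    // Hypothetical hook for deep-copying or serializing the current model.
    interface Copier<T> {
        T copyOf(T model);
    }

    private final List<M> snapshots = new ArrayList<>();
    private double bestScore = Double.NEGATIVE_INFINITY;

    // Call at the end of each epoch; snapshots the model whenever the
    // validation score improves on the best seen so far.
    void onEpochEnd(M model, double validationScore, Copier<M> copier) {
        if (validationScore > bestScore) {
            bestScore = validationScore;
            snapshots.add(copier.copyOf(model));
        }
    }

    // The saved snapshots later become the ensemble members.
    List<M> getSnapshots() {
        return snapshots;
    }
}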

Snapshot Ensembling produces an ensemble of accurate and diverse models from a single training process. At the heart of Snapshot Ensembling is an optimization process which visits several local minima before converging to a final solution. We take model snapshots at these various minima, and average their predictions at test time.

— Snapshot Ensembles: Train 1, get M for free, 2017.

A variation on the Snapshot ensemble is to save models from a range of epochs, perhaps identified by reviewing learning curves of model performance on the train and validation datasets during training. Ensembles from such contiguous sequences of models are referred to as horizontal ensembles.

First, networks trained for a relatively stable range of epoch are selected. The predictions of the probability of each label are produced by standard classifiers [over] the selected epoch[s], and then averaged.

— Horizontal and vertical ensemble with deep representation for classification, 2013.

A further enhancement of the snapshot ensemble is to systematically vary the optimization procedure during training to force different solutions (i.e. sets of weights), the best of which can be saved to checkpoints. This might involve injecting an oscillating amount of noise over training epochs or oscillating the learning rate during training epochs. A variation of this approach called Stochastic Gradient Descent with Warm Restarts (SGDR) demonstrated faster learning and state-of-the-art results for standard photo classification tasks.

Our SGDR simulates warm restarts by scheduling the learning rate to achieve competitive results […] roughly two to four times faster. We also achieved new state-of-the-art results with SGDR, mainly by using even wider [models] and ensembles of snapshots from SGDR’s trajectory.

— SGDR: Stochastic Gradient Descent with Warm Restarts, 2016.

A benefit of very deep neural networks is that the intermediate hidden layers provide a learned representation of the low-resolution input data. The hidden layers can output their internal representations directly, and the output from one or more hidden layers from one very deep network can be used as input to a new classification model. This is perhaps most effective when the deep model is trained using an autoencoder model. This type of ensemble is referred to as a vertical ensemble.

This method ensembles a series of classifiers whose inputs are the representation of intermediate layers. A lower error rate is expected because these features seem diverse.

— Horizontal and vertical ensemble with deep representation for classification, 2013.

Varying Combinations

The simplest way to combine the predictions is to calculate the average of the predictions from the ensemble members.

This can be improved slightly by weighting the predictions from each model, where the weights are optimized using a hold-out validation dataset. This provides a weighted average ensemble that is sometimes called model blending.
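A minimal sketch of the combination step for a weighted average ensemble; finding the weights on a hold-out validation set (for example by grid search) is not shown, and the weights are assumed to be non-negative and to sum to 1:

final class WeightedAverage {

    // memberPredictions[m][c]: probability of class c from member m.
    static double[] combine(double[][] memberPredictions, double[] weights) {
        int numClasses = memberPredictions[0].length;
        double[] combined = new double[numClasses];
        for (int m = 0; m < memberPredictions.length; m++) {
            for (int c = 0; c < numClasses; c++) {
                combined[c] += weights[m] * memberPredictions[m][c];
            }
        }
        return combined;
    }
}

With equal weights (1 / number of members) this reduces to the plain model averaging ensemble above.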

… we might expect that some members of the committee will typically make better predictions than other members. We would therefore expect to be able to reduce the error still further if we give greater weight to some committee members than to others. Thus, we consider a generalized committee prediction given by a weighted combination of the predictions of the members …

— Page 367, Neural Networks for Pattern Recognition, 1995.

One further step in complexity involves using a new model to learn how to best combine the predictions from each ensemble member.

The model could be a simple linear model (e.g. much like the weighted average), but could be a sophisticated nonlinear method that also considers the specific input sample in addition to the predictions provided by each member. This general approach of learning a new model is called model stacking, or stacked generalization.
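A minimal sketch of how the level-1 (meta-model) training rows can be built for stacked generalization: each member's prediction vector for a sample is concatenated into a single feature row, and a second model (not shown) is then fit on these rows against the true labels. The names are my own; in practice the members' out-of-fold predictions are used to avoid leaking the training labels:

final class StackingFeatures {

    // memberPredictions[m] is member m's prediction vector for one sample;
    // the concatenation becomes that sample's input row for the meta-model.
    static double[] buildMetaRow(double[][] memberPredictions) {
        int total = 0;
        for (double[] p : memberPredictions) {
            total += p.length;
        }
        double[] row = new double[total];
        int offset = 0;
        for (double[] p : memberPredictions) {
            System.arraycopy(p, 0, row, offset, p.length);
            offset += p.length;
        }
        return row;
    }
}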

Stacked generalization works by deducing the biases of the generalizer(s) with respect to a provided learning set. This deduction proceeds by generalizing in a second space whose inputs are (for example) the guesses of the original generalizers when taught with part of the learning set and trying to guess the rest of it, and whose output is (for example) the correct guess. […] When used with a single generalizer, stacked generalization is a scheme for estimating (and then correcting for) the error of a generalizer which has been trained on a particular learning set and then asked a particular question.

— Stacked generalization, 1992.

There are more sophisticated methods for stacking models, such as boosting where ensemble members are added one at a time in order to correct the mistakes of prior models. The added complexity means this approach is less often used with large neural network models.

Another combination that is a little bit different is to combine the weights of multiple neural networks with the same structure. The weights of multiple networks can be averaged, to hopefully result in a new single model that has better overall performance than any original model. This approach is called model weight averaging.
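A minimal sketch of the weight averaging step, with each network's weights flattened into a single array. It assumes all members share exactly the same architecture so that their weight vectors align position by position:

final class WeightAveraging {

    // memberWeights[m][i]: weight i of member m; returns the averaged vector
    // to load back into one network of the same architecture.
    static double[] average(double[][] memberWeights) {
        double[] averaged = new double[memberWeights[0].length];
        for (double[] w : memberWeights) {
            for (int i = 0; i < averaged.length; i++) {
                averaged[i] += w[i] / memberWeights.length;
            }
        }
        return averaged;
    }
}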

… suggests it is promising to average these points in weight space, and use a network with these averaged weights, instead of forming an ensemble by averaging the outputs of networks in model space

— Averaging Weights Leads to Wider Optima and Better Generalization, 2018.

Summary of Ensemble Techniques

In summary, we can list some of the more common and interesting ensemble methods for neural networks organized by each element of the method that can be varied, as follows:

  • Varying Training Data
    • k-fold Cross-Validation Ensemble
    • Bootstrap Aggregation (bagging) Ensemble
    • Random Training Subset Ensemble
  • Varying Models
    • Multiple Training Run Ensemble
    • Hyperparameter Tuning Ensemble
    • Snapshot Ensemble
    • Horizontal Epochs Ensemble
    • Vertical Representational Ensemble
  • Varying Combinations
    • Model Averaging Ensemble
    • Weighted Average Ensemble
    • Stacked Generalization (stacking) Ensemble
    • Boosting Ensemble
    • Model Weight Averaging Ensemble

There is no single best ensemble method; perhaps experiment with a few approaches or let the constraints of your project guide you.

Further Reading

This section provides more resources on the topic if you are looking to go deeper.

Books

  • Section 9.6 Committees of networks, Neural Networks for Pattern Recognition, 1995.
  • Section 7.11 Bagging and Other Ensemble Methods, Deep Learning, 2016.
  • Section 7.3.3 Model ensembling, Deep Learning With Python, 2017.
  • Section 8.2 Bagging, Random Forests, Boosting, An Introduction to Statistical Learning with Applications in R, 2013.

Papers

  • Neural Network Ensembles, 1990.
  • Neural Network Ensembles, Cross Validation, and Active Learning, 1994.
  • When networks disagree: Ensemble methods for hybrid neural networks, 1995.
  • Snapshot Ensembles: Train 1, get M for free, 2017.
  • SGDR: Stochastic Gradient Descent with Warm Restarts, 2016.
  • Horizontal and vertical ensemble with deep representation for classification, 2013.
  • Stacked generalization, 1992.
  • Averaging Weights Leads to Wider Optima and Better Generalization, 2018.

Articles

  • Ensemble learning, Wikipedia.
  • Bootstrap aggregating, Wikipedia.
  • Boosting (machine learning), Wikipedia.

Summary

In this post, you discovered ensemble methods for deep learning neural networks to reduce variance and improve prediction performance.

Specifically, you learned:

  • Neural network models are nonlinear and have a high variance, which can be frustrating when preparing a final model for making predictions.
  • Ensemble learning combines the predictions from multiple neural network models to reduce the variance of predictions and reduce generalization error.
  • Techniques for ensemble learning can be grouped by the element that is varied, such as training data, the model, and how predictions are combined.

Do you have any questions?
Ask your questions in the comments below and I will do my best to answer. 

 

Android canvas drawBitmap method explained with examples


Previously, when using canvas.drawBitmap(Bitmap, SrcRect, DesRect, Paint) in a custom view, I was unclear about the meaning of the 2nd and 3rd parameters. Reading the source of the function did not clear it up either, so after reading some other blog posts and adding my own understanding I have organized it as follows. First, look at a picture; this is the picture we are going to draw today.

[image]

Then the picture is divided into 4 parts by red lines, as shown below:


[image]

We define a custom View with the following code:

import android.content.Context;
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Paint;
import android.graphics.Rect;
import android.graphics.drawable.BitmapDrawable;
import android.util.Log;
import android.view.View;

public class PoterDuffLoadingView extends View {

    private Resources mResources;
    private Paint mBitPaint;
    private Bitmap mBitmap;

    private int mTotalWidth, mTotalHeight;
    private Bitmap girlBitmap;
    private int girlBitWidth, girlBitHeight;
    private Rect girlSrcRect, girlDesRect;

    public PoterDuffLoadingView(Context context) {
        super(context);
        mResources = getResources();
        initBitmap();
    }

    private void initBitmap() {
        // Load the girl picture and record its width and height
        girlBitmap = ((BitmapDrawable) mResources.getDrawable(R.drawable.a1)).getBitmap();
        girlBitWidth = girlBitmap.getWidth();
        girlBitHeight = girlBitmap.getHeight();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        canvas.drawBitmap(girlBitmap, girlSrcRect, girlDesRect, null);
    }

    @Override
    protected void onMeasure(int widthMeasureSpec, int heightMeasureSpec) {
        super.onMeasure(widthMeasureSpec, heightMeasureSpec);
    }

    @Override
    protected void onSizeChanged(int w, int h, int oldw, int oldh) {
        Log.d("xxxxxx", "onSizeChanged, w = " + w + ", h = " + h
                + ", girlBitWidth = " + girlBitWidth + ", girlBitHeight = " + girlBitHeight);
        super.onSizeChanged(w, h, oldw, oldh);
        mTotalWidth = w;
        mTotalHeight = h;

        // Source rect: the whole bitmap. Destination rect: the same size, anchored at (0, 0) in the View.
        girlSrcRect = new Rect(0, 0, girlBitWidth, girlBitHeight);
        girlDesRect = new Rect(0, 0, girlBitWidth, girlBitHeight);
    }
}

Here:

girlSrcRect = new Rect(0, 0, girlBitWidth, girlBitHeight);

means that the region to draw is the entire bitmap. I think of girlSrcRect as the range of the bitmap that will be drawn.

girlDesRect is the position on the View where the image is drawn. It means the distances from the left and top edges are 0, the right edge (girlBitWidth - 0) equals the bitmap's width, and the bottom edge (girlBitHeight - 0) equals the bitmap's height.

The result looks like this:


[image]

When we want to move the image to a different position, the code is as follows:

girlDesRect = new Rect(20, 50, 20 + girlBitWidth, 50 + girlBitHeight);

This means the image's left edge sits at 20 pixels and its top sits at 50 pixels in the View. The image's right edge in the View is at (20 + girlBitWidth), i.e. its distance from the image's left edge is [(20 + girlBitWidth) - 20] = girlBitWidth. The bottom works the same way.

As shown:


[image]

What if we only want to draw part 2 of the picture above? We just need to make the following change:

girlSrcRect = new Rect(girlBitWidth / 2, 0, girlBitWidth, girlBitHeight / 2); // part 2: the top-right quarter
girlDesRect = new Rect(20, 50, 20 + girlBitWidth, 50 + girlBitHeight);

Result:

[image]


It does indeed show the second part of the picture. A careful reader will notice that this result is the same size as the whole picture from before. Why is that?

girlSrcRect: the part of the bitmap we want to draw.

girlDesRect: the position where we draw it. From the girlDesRect values above, the drawing area really is the whole picture's area, so the selected part is stretched to fill it.

What if we only want the result to be 1/4 of the picture's size? Like this:

girlSrcRect = new Rect(girlBitWidth / 2, 0, girlBitWidth, girlBitHeight / 2);
girlDesRect = new Rect(20, 50, 20 + girlBitWidth / 2, 50 + girlBitHeight / 2);

As shown below:

[image]

That's it for now; after all, writing it up yourself makes it stick much better!

Thanks for reading. I hope this helps everyone, and thank you for supporting this site!

android canvas drawLine


Canvas's drawLine can draw a line, but what should you do if you want to draw a thicker, bolder line? There does not seem to be an obvious way.
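One common approach: the line width is controlled by the Paint passed to drawLine, via Paint.setStrokeWidth(), not by drawLine itself. A minimal sketch, assuming the usual android.graphics imports and that this is called from a View's onDraw (the coordinates and color here are arbitrary):

// In a real View, create the Paint once (e.g. in the constructor) rather
// than on every draw pass.
private void drawThickLine(Canvas canvas) {
    Paint thickPaint = new Paint(Paint.ANTI_ALIAS_FLAG);
    thickPaint.setColor(Color.RED);
    thickPaint.setStrokeWidth(8f);            // line width in pixels
    thickPaint.setStrokeCap(Paint.Cap.ROUND); // optional rounded end caps
    canvas.drawLine(10f, 10f, 200f, 10f, thickPaint);
}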

Android Canvas.drawText not rendering


I have a custom view that draws several different things on the screen. Each has its own Paint object. Everything works fine except the text. It rendered fine on Gingerbread, but on ICS no text appears.

Here is my draw method:

protected void onDraw(Canvas canvas)
{
    canvas.save(Canvas.MATRIX_SAVE_FLAG);
    canvas.scale(getWidth(), getHeight());

    drawGrid(canvas);
    drawHeader(canvas);
    drawSelected(canvas);
    drawDays(canvas);
    drawToday(canvas);

    canvas.restore();
}

The grid, selected and today drawing work fine. The header and the days are the text drawing, and they do not render.

Here is the drawHeader method:

private void drawHeader(Canvas canvas)
{
    // DAYS[] holds the normalized x position of each column heading.
    // Some coordinates were dropped when this snippet was copied; the y values
    // filled back in here (baseline at .05f, rules at .0f and .07f) are inferred
    // and may differ slightly from the original code.
    canvas.drawText("Sun",   DAYS[0], .05f, paintDaysOfTheWeek);
    canvas.drawText("Mon",   DAYS[1], .05f, paintDaysOfTheWeek);
    canvas.drawText("Tues",  DAYS[2], .05f, paintDaysOfTheWeek);
    canvas.drawText("Wed",   DAYS[3], .05f, paintDaysOfTheWeek);
    canvas.drawText("Thurs", DAYS[4], .05f, paintDaysOfTheWeek);
    canvas.drawText("Fri",   DAYS[5], .05f, paintDaysOfTheWeek);
    canvas.drawText("Sat",   DAYS[6], .05f, paintDaysOfTheWeek);

    canvas.drawLine(.01f, .0f,  .99f, .0f,  paintMediumBlack);
    canvas.drawLine(.01f, .07f, .99f, .07f, paintMediumBlack);
}

Any ideas?

Solution

I fixed this problem by adding

if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.HONEYCOMB) {
    setLayerType(LAYER_TYPE_SOFTWARE, paint);
}

to my custom view. I'm not sure why I need it; if anyone can explain that, it would be great.

