In this article, we will walk you through the example source code of the Python numpy module's cast() function (python casting). We will also cover the Electron / Mongoose / MongoDB cast error "Error: Payload validation failed: video_buffer: Cast to Buffer failed for value "Uint8Array.."", "Cannot cast java.lang.Character to java.lang.String at java.lang.Class.cast" as hit by herbetr, "java.util.LinkedHashMap cannot be cast to xxx" together with "net.sf.ezmorph.bean.MorphDynaBean cannot be cast ...", and the Java type-safety warnings "Type safety: Unchecked cast from Object to ..." / "Type safety: Unchecked cast from T...", to help you better understand these topics.
Contents:
- Python numpy module cast() example source code (python casting)
- Electron / Mongoose / MongoDB cast error: "Error: Payload validation failed: video_buffer: Cast to Buffer failed for value "Uint8Array.."
- herbetr hits "Cannot cast java.lang.Character to java.lang.String at java.lang.Class.cast"
- java.util.LinkedHashMap cannot be cast to xxx and net.sf.ezmorph.bean.MorphDynaBean cannot be cast ...
- Type safety in Java: "Type safety: Unchecked cast from Object to ..." or "Type safety: Unchecked cast from T..."
Python numpy module cast() example source code (python casting)
Python numpy module: cast() example source code
We extracted the following 50 code examples from open-source Python projects to illustrate how numpy.cast() is used.
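Before the project samples, a minimal sketch of what np.cast actually is may help: it is a dict-like table mapping each dtype to a conversion function, so np.cast['float32'](x) is equivalent to np.asarray(x).astype('float32'). Note that np.cast was deprecated in NumPy 1.25 and removed in NumPy 2.0, so the samples below assume an older NumPy.

import numpy as np

x = [0, 1, 2]                    # a plain Python list, illustrative values
y = np.cast['float32'](x)        # same as np.asarray(x).astype('float32')
print(y.dtype)                   # float32
# modern equivalent, since np.cast is gone in NumPy >= 2.0:
z = np.asarray(x).astype(np.float32)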
def glorot_normal(shape, gain=1.0, c01b=False):
    # Glorot (Xavier) normal initializer, returned as a floatX numpy array.
    orig_shape = shape
    if c01b:
        if len(shape) != 4:
            raise RuntimeError(
                "If c01b is True, only shapes of length 4 are accepted")
        n1, n2 = shape[0], shape[3]
        receptive_field_size = shape[1] * shape[2]
    else:
        if len(shape) < 2:
            shape = (1,) + tuple(shape)
        n1, n2 = shape[:2]
        receptive_field_size = np.prod(shape[2:])
    std = gain * np.sqrt(2.0 / ((n1 + n2) * receptive_field_size))
    return np.cast[floatX](
        get_rng().normal(0.0, std, size=orig_shape))
def adamax_updates(params, cost, lr=0.001, mom1=0.9, mom2=0.999):
    # Adamax: Adam variant that tracks the infinity norm of the gradients.
    updates = []
    grads = T.grad(cost, params)
    for p, g in zip(params, grads):
        mg = th.shared(np.cast[th.config.floatX](p.get_value() * 0.))
        v = th.shared(np.cast[th.config.floatX](p.get_value() * 0.))
        if mom1 > 0:
            v_t = mom1*v + (1. - mom1)*g
            updates.append((v, v_t))
        else:
            v_t = g
        mg_t = T.maximum(mom2*mg, abs(g))
        g_t = v_t / (mg_t + 1e-6)
        p_t = p - lr * g_t
        updates.append((mg, mg_t))
        updates.append((p, p_t))
    return updates
def adam_updates(params, cost, lr=0.001, mom1=0.9, mom2=0.999):
    # Adam with bias-corrected first and second moment estimates.
    updates = []
    grads = T.grad(cost, params)
    t = th.shared(np.cast[th.config.floatX](1.))
    for p, g in zip(params, grads):
        v = th.shared(np.cast[th.config.floatX](p.get_value() * 0.))
        mg = th.shared(np.cast[th.config.floatX](p.get_value() * 0.))
        v_t = mom1*v + (1. - mom1)*g
        mg_t = mom2*mg + (1. - mom2)*T.square(g)
        v_hat = v_t / (1. - mom1 ** t)
        mg_hat = mg_t / (1. - mom2 ** t)
        g_t = v_hat / T.sqrt(mg_hat + 1e-8)
        p_t = p - lr * g_t
        updates.append((v, v_t))
        updates.append((mg, mg_t))
        updates.append((p, p_t))
    updates.append((t, t+1))
    return updates
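A quick plain-NumPy sanity check of the bias correction used above (values are illustrative): with a constant gradient, the corrected second-moment estimate is exact from the very first step.

import numpy as np

g, mom2 = 3.0, 0.999
mg, t = 0.0, 1
for _ in range(5):
    mg = mom2 * mg + (1 - mom2) * g**2
    print(mg / (1 - mom2**t))   # prints 9.0 every time: the unbiased estimate of g**2
    t += 1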
def get_output_for(self, input, deterministic=False, **kwargs):
    if deterministic:
        # use the running averages collected during training
        norm_features = (input - self.avg_batch_mean.dimshuffle(*self.dimshuffle_args)) / T.sqrt(1e-6 + self.avg_batch_var).dimshuffle(*self.dimshuffle_args)
    else:
        batch_mean = T.mean(input, axis=self.axes_to_sum).flatten()
        centered_input = input - batch_mean.dimshuffle(*self.dimshuffle_args)
        batch_var = T.mean(T.square(centered_input), axis=self.axes_to_sum).flatten()
        batch_stdv = T.sqrt(1e-6 + batch_var)
        norm_features = centered_input / batch_stdv.dimshuffle(*self.dimshuffle_args)
        # BN updates
        new_m = 0.9*self.avg_batch_mean + 0.1*batch_mean
        new_v = 0.9*self.avg_batch_var + T.cast((0.1*input.shape[0])/(input.shape[0]-1), th.config.floatX)*batch_var
        self.bn_updates = [(self.avg_batch_mean, new_m), (self.avg_batch_var, new_v)]
    if hasattr(self, 'g'):
        activation = norm_features*self.g.dimshuffle(*self.dimshuffle_args)
    else:
        activation = norm_features
    if hasattr(self, 'b'):
        activation += self.b.dimshuffle(*self.dimshuffle_args)
    return self.nonlinearity(activation)
def __call__(self, learning_rate):
    """Update the learning rate according to the exponential decay
    schedule.
    """
    if self._count == 0.:
        self._base_lr = learning_rate.get_value()
    self._count += 1
    if not self._min_reached:
        new_lr = self._base_lr * (self.decay_factor ** (-self._count))
        if new_lr <= self.min_lr:
            self._min_reached = True
            new_lr = self.min_lr
    else:
        new_lr = self.min_lr
    learning_rate.set_value(np.cast[theano.config.floatX](new_lr))
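A plain-Python trace of this schedule with hypothetical settings (the decay_factor, min_lr, and values below are illustrative, not from the original project): the rate decays geometrically until it reaches the floor, then stays there.

base_lr, decay_factor, min_lr = 0.1, 2.0, 0.01
for count in range(1, 6):
    lr = max(base_lr * decay_factor ** (-count), min_lr)
    print(count, lr)   # 0.05, 0.025, 0.0125, then clamped at 0.01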
def as_floatX(variable):
    """
    This code is taken from pylearn2:
    Casts a given variable into dtype config.floatX.
    numpy ndarrays will remain numpy ndarrays,
    python floats will become 0-D ndarrays,
    all other types will be treated as theano tensors.
    """
    if isinstance(variable, float):
        return numpy.cast[theano.config.floatX](variable)
    if isinstance(variable, numpy.ndarray):
        return numpy.cast[theano.config.floatX](variable)
    return theano.tensor.cast(variable, theano.config.floatX)
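A short usage sketch (assuming Theano is importable and as_floatX is in scope):

import numpy
import theano

print(as_floatX(0.5).dtype)                      # floatX (e.g. float32)
print(as_floatX(numpy.arange(3)).dtype)          # floatX ndarray
print(type(as_floatX(theano.tensor.scalar())))   # a symbolic Theano tensor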
def parameter_prediction(self, test_set_x):  # , batch_size
    """ This function is to predict the output of NN
    :param test_set_x: input features for a testing sentence
    :type test_set_x: python array variable
    :returns: predicted features
    """
    n_test_set_x = test_set_x.shape[0]
    test_out = theano.function([], self.final_layer.output,
            givens={self.x: test_set_x, self.is_train: np.cast['int32'](0)}, on_unused_input='ignore')
    predict_parameter = test_out()
    return predict_parameter
## the function to output activations at a hidden layer
def generate_hidden_layer(self, test_set_x, bn_layer_index):
    """ This function is to predict the bottleneck features of NN
    :param test_set_x: input features for a testing sentence
    :type test_set_x: python array variable
    :returns: predicted bottleneck features
    """
    n_test_set_x = test_set_x.shape[0]
    test_out = theano.function([], self.rnn_layers[bn_layer_index].output,
            givens={self.x: test_set_x, self.is_train: np.cast['int32'](0)}, on_unused_input='ignore')
    predict_parameter = test_out()
    return predict_parameter
def parameter_prediction_S2S(self, test_set_x, test_set_d):
    """ This function is to predict the output of NN
    :param test_set_x: input features for a testing sentence
    :param test_set_d: phone durations for a testing sentence
    :type test_set_x: python array variable
    :type test_set_d: python array variable
    :returns: predicted features
    """
    n_test_set_x = test_set_x.shape[0]
    test_out = theano.function([], self.final_layer.output,
            givens={self.x: test_set_x[0:n_test_set_x], self.d: test_set_d[0:n_test_set_x], self.is_train: np.cast['int32'](0)}, on_unused_input='ignore')
    predict_parameter = test_out()
    return predict_parameter
def get_training_data(num_samples):
    """Generates some training data."""
    # As (x, y) Cartesian coordinates.
    x = np.random.randint(0, 2, size=(num_samples, 2))
    y = x[:, 0] + 2 * x[:, 1]  # 2-digit binary to integer.
    y = np.cast['int32'](y)
    x = np.cast['float32'](x) * 1.6 - 0.8  # Scales to [-0.8, 0.8].
    x += np.random.uniform(-0.1, 0.1, size=x.shape)
    y_ohe = np.cast['float32'](np.eye(4)[y])
    y = np.cast['float32'](np.expand_dims(y, -1))
    return x, y, y_ohe
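A quick look at what this returns (the shapes follow directly from the code):

x, y, y_ohe = get_training_data(8)
print(x.shape, x.dtype)    # (8, 2) float32, jittered values near -0.8 and 0.8
print(y.shape, y.dtype)    # (8, 1) float32, class labels 0..3 stored as floats
print(y_ohe.shape)         # (8, 4) one-hot rows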
def pcnn_norm(x, colorspace="RGB", reverse=False):
    """Normalize the input from and to [-1, 1].
    Args:
      x: input image array (3D or 4D)
      colorspace (str): source/target colorspace, depending on the value of `reverse`
      reverse (bool, optional): If False, converts the input from the given colorspace
        to float in the range [-1, 1]. Otherwise, converts the input back to the valid
        range for the given colorspace. Defaults to False.
    Returns:
      x_norm: normalized input
    """
    if colorspace == "RGB":
        return np.cast[np.uint8](x * 127.5 + 127.5) if reverse else np.cast[np.float32]((x - 127.5) / 127.5)
    elif colorspace == "lab":
        if x.shape[-1] == 1:
            return (x * 50. + 50.) if reverse else np.cast[np.float32]((x - 50.) / 50.)
        else:
            a = np.array([50., +0.5, -0.5], dtype=np.float32)
            b = np.array([50., 127.5, 127.5], dtype=np.float32)
            return np.cast[np.float64](x * b + a) if reverse else np.cast[np.float32]((x - a) / b)
    else:
        raise ValueError("Unknown colorspace: %s" % colorspace)
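A hedged round-trip sketch for the RGB branch (illustrative data; a difference of at most one intensity level is expected from the uint8 truncation):

img = np.random.randint(0, 256, size=(4, 4, 3)).astype(np.uint8)
norm = pcnn_norm(img)                   # float32 in [-1, 1]
back = pcnn_norm(norm, reverse=True)    # back to uint8
print(np.abs(back.astype(int) - img.astype(int)).max() <= 1)   # True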
def __init__(self, input, n_in, n_out, prob_drop=0.5, verbose=False):
    self.verbose = verbose
    self.prob_drop = prob_drop
    self.prob_keep = 1.0 - prob_drop
    self.flag_on = theano.shared(np.cast[theano.config.floatX](1.0))
    self.flag_off = 1.0 - self.flag_on
    seed_this = DropoutLayer.seed_common.randint(0, 2**31-1)
    mask_rng = theano.tensor.shared_randomstreams.RandomStreams(seed_this)
    self.mask = mask_rng.binomial(n=1, p=self.prob_keep, size=input.shape)
    self.output = \
        self.flag_on * T.cast(self.mask, theano.config.floatX) * input + \
        self.flag_off * self.prob_keep * input
    DropoutLayer.layers.append(self)
    if self.verbose:
        print('dropout layer with P_drop: ' + str(self.prob_drop))
def load_data(dataset):
    if dataset.split('.')[-1] == 'gz':
        f = gzip.open(dataset, 'r')
    else:
        f = open(dataset, 'r')
    train_set, valid_set, test_set = pkl.load(f)
    f.close()

    def shared_dataset(data_xy, borrow=True):
        data_x, data_y = data_xy
        shared_x = theano.shared(
            np.asarray(data_x, dtype=theano.config.floatX),
            borrow=borrow)
        shared_y = theano.shared(
            np.asarray(data_y, dtype=theano.config.floatX),
            borrow=borrow)
        return shared_x, T.cast(shared_y, 'int32')

    train_set_x, train_set_y = shared_dataset(train_set)
    valid_set_x, valid_set_y = shared_dataset(valid_set)
    test_set_x, test_set_y = shared_dataset(test_set)
    return [(train_set_x, train_set_y),
            (valid_set_x, valid_set_y),
            (test_set_x, test_set_y)]
def adam(loss, params, learning_rate, beta1=0.9, beta2=0.999, epsilon=1e-8):
    grads = T.grad(loss, params)
    updates = OrderedDict()
    t_prev = theano.shared(np.cast[theano.config.floatX](0))
    t = t_prev + 1
    a_t = learning_rate * T.sqrt(1 - beta2**t) / (1 - beta1**t)
    for param, grad in zip(params, grads):
        value = param.get_value(borrow=True)
        m_prev = theano.shared(
            np.zeros(value.shape, dtype=value.dtype),
            broadcastable=param.broadcastable)
        v_prev = theano.shared(
            np.zeros(value.shape, dtype=value.dtype),
            broadcastable=param.broadcastable)
        m_t = beta1 * m_prev + (1 - beta1) * grad
        v_t = beta2 * v_prev + (1 - beta2) * grad ** 2
        step = a_t * m_t / (T.sqrt(v_t) + epsilon)
        updates[m_prev] = m_t
        updates[v_prev] = v_t
        updates[param] = param - step
    updates[t_prev] = t
    return updates
def one_hot(labels, num_classes, name='one_hot'):
    """Transform numeric labels into one-hot labels.
    Args:
      labels: [batch_size] target labels.
      num_classes: total number of classes.
      name: optional name for the op scope.
    Returns:
      one hot encoding of the labels.
    """
    # Note: this uses the pre-1.0 TensorFlow API
    # (tf.op_scope, tf.pack, tf.sparse_to_dense).
    with tf.op_scope([labels], name):
        batch_size = labels.get_shape()[0]
        indices = tf.expand_dims(tf.range(0, batch_size), 1)
        labels = tf.cast(tf.expand_dims(labels, 1), indices.dtype)
        concated = tf.concat(1, [indices, labels])
        onehot_labels = tf.sparse_to_dense(
            concated, tf.pack([batch_size, num_classes]), 1.0, 0.0)
        onehot_labels.set_shape([batch_size, num_classes])
        return onehot_labels
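For reference, in any TensorFlow from 1.0 onward this whole helper collapses into a built-in op; a minimal hedged equivalent:

import tensorflow as tf

def one_hot_modern(labels, num_classes):
    # labels: [batch_size] integer tensor
    return tf.one_hot(tf.cast(labels, tf.int32), depth=num_classes, dtype=tf.float32)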
def adam_conditional_updates(params, cost, mincost, lr=0.001, mom1=0.9, mom2=0.999):
    # like adam_updates above, but if cost is less than mincost, don't do the update
    updates = []
    grads = T.grad(cost, params)
    t = th.shared(np.cast[th.config.floatX](1.))
    for p, g in zip(params, grads):
        v = th.shared(np.cast[th.config.floatX](p.get_value() * 0.))
        mg = th.shared(np.cast[th.config.floatX](p.get_value() * 0.))
        v_t = mom1*v + (1. - mom1)*g
        mg_t = mom2*mg + (1. - mom2)*T.square(g)
        v_hat = v_t / (1. - mom1 ** t)
        mg_hat = mg_t / (1. - mom2 ** t)
        g_t = v_hat / T.sqrt(mg_hat + 1e-8)
        p_t = p - lr * g_t
        updates.append((v, ifelse(cost < mincost, v, v_t)))
        updates.append((mg, ifelse(cost < mincost, mg, mg_t)))
        updates.append((p, ifelse(cost < mincost, p, p_t)))
    updates.append((t, ifelse(cost < mincost, t, t+1)))
    return updates
def shared_dataset(data_xy, borrow=True):
    """ Function that loads the dataset into shared variables
    The reason we store our dataset in shared variables is to allow
    Theano to copy it into the GPU memory (when code is run on GPU).
    Since copying data into the GPU is slow, copying a minibatch every time
    it is needed (the default behaviour if the data is not in a shared
    variable) would lead to a large decrease in performance.
    """
    data_x, data_y = data_xy
    shared_x = theano.shared(np.asarray(data_x,
                                        dtype=theano.config.floatX),
                             borrow=borrow)
    shared_y = theano.shared(np.asarray(data_y,
                                        dtype=theano.config.floatX),
                             borrow=borrow)
    # labels are stored as floatX for the GPU transfer but used as int32
    return shared_x, T.cast(shared_y, 'int32')
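A hedged usage sketch (illustrative random data; assumes the np/theano imports used throughout these samples):

data = (np.random.rand(100, 3), np.random.randint(0, 2, size=100))
shared_x, shared_y = shared_dataset(data)
print(shared_x.get_value().dtype)   # floatX, e.g. float32
print(shared_y.dtype)               # 'int32' (a symbolic cast of the shared labels)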
def categorical_accuracy(y_pred, y_true, top_k=1, reduction=tf.reduce_mean,
                         name="CategoricalAccuracy"):
    """ Non-differentiable """
    with tf.variable_scope(name):
        if y_true.get_shape().ndims == y_pred.get_shape().ndims:
            y_true = tf.argmax(y_true, axis=-1)
        elif y_true.get_shape().ndims != y_pred.get_shape().ndims - 1:
            raise TypeError('rank mismatch between y_true and y_pred')
        if top_k == 1:
            # standard categorical accuracy
            top = tf.argmax(y_pred, axis=-1)
            y_true = tf.cast(y_true, top.dtype.base_dtype)
            match_values = tf.equal(top, y_true)
        else:
            match_values = tf.nn.in_top_k(y_pred, tf.cast(y_true, 'int32'),
                                          k=top_k)
        match_values = tf.cast(match_values, dtype='float32')
        return reduction(match_values)
def to_llr(x, name="LogLikelihoodratio"):
    ''' Convert a matrix of probabilities into log-likelihood ratio
    :math:`LLR = \\log(\\frac{prob(data|target)}{prob(data|non-target)})`
    '''
    if not is_tensor(x):
        x /= np.sum(x, axis=-1, keepdims=True)
        x = np.clip(x, 10e-8, 1. - 10e-8)
        return np.log(x / (np.cast[x.dtype](1.) - x))
    else:
        with tf.variable_scope(name):
            x /= tf.reduce_sum(x, axis=-1, keepdims=True)
            x = tf.clip_by_value(x, 10e-8, 1. - 10e-8)
            return tf.log(x / (tf.cast(1., x.dtype.base_dtype) - x))
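A quick numeric check of the NumPy branch (assuming is_tensor returns False for ndarrays, as elsewhere in this codebase):

probs = np.array([[0.8, 0.2]], dtype=np.float32)
print(to_llr(probs))   # ~ [[ 1.386 -1.386]]; log(0.8 / 0.2) = log(4) ~ 1.386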
# ===========================================================================
# Speech task metrics
# ===========================================================================
def glorot_uniform(shape, gain=1.0, c01b=False):
    # Glorot uniform initializer (same fan computation as glorot_normal).
    orig_shape = shape
    if len(shape) < 2:
        shape = (1,) + tuple(shape)
    n1, n2 = shape[:2]
    receptive_field_size = np.prod(shape[2:])
    std = gain * np.sqrt(2.0 / ((n1 + n2) * receptive_field_size))
    a = 0.0 - np.sqrt(3) * std
    b = 0.0 + np.sqrt(3) * std
    return np.cast[floatX](
        get_rng().uniform(low=a, high=b, size=orig_shape))
def he_normal(shape, gain=1.0, c01b=False):
    # He normal initializer; gain='relu' selects sqrt(2).
    if gain == 'relu':
        gain = np.sqrt(2)
    if c01b:
        if len(shape) != 4:
            raise RuntimeError(
                "If c01b is True, only shapes of length 4 are accepted")
        fan_in = np.prod(shape[:3])
    else:
        if len(shape) <= 2:
            fan_in = shape[0]
        elif len(shape) > 2:
            fan_in = np.prod(shape[1:])
    std = gain * np.sqrt(1.0 / fan_in)
    return np.cast[floatX](
        get_rng().normal(0.0, std, size=shape))
def orthogonal(shape, gain=1.0):
    if gain == 'relu':
        gain = np.sqrt(2)
    if len(shape) < 2:
        raise RuntimeError("Only shapes of length 2 or more are supported, but "
                           "given shape: %s" % str(shape))
    flat_shape = (shape[0], np.prod(shape[1:]))
    a = get_rng().normal(0.0, 1.0, flat_shape)
    u, _, v = np.linalg.svd(a, full_matrices=False)
    # pick the one with the correct shape
    q = u if u.shape == flat_shape else v
    q = q.reshape(shape)
    return np.cast[floatX](gain * q)
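A hedged sanity check (assuming get_rng and floatX are defined as in the samples above): the returned square matrix should be orthonormal up to float tolerance.

W = orthogonal((64, 64))
print(np.allclose(W.T.dot(W), np.eye(64), atol=1e-4))   # True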
# ===========================================================================
# Fast initialization
# ===========================================================================
def __init__(self,
             init_momentum,
             averaging_coeff=0.95,
             stabilizer=1e-2,
             use_first_order=False,
             bound_inc=False,
             momentum_clipping=None):
    init_momentum = float(init_momentum)
    assert init_momentum >= 0.
    assert init_momentum <= 1.
    averaging_coeff = float(averaging_coeff)
    assert averaging_coeff >= 0.
    assert averaging_coeff <= 1.
    stabilizer = float(stabilizer)
    assert stabilizer >= 0.
    self.__dict__.update(locals())
    del self.self
    self.momentum = sharedX(self.init_momentum)
    self.momentum_clipping = momentum_clipping
    if momentum_clipping is not None:
        self.momentum_clipping = np.cast[config.floatX](momentum_clipping)
def __init__(self,
             init_momentum=0.9,
             averaging_coeff=0.99,
             stabilizer=1e-4,
             update_param_norm_ratio=0.003,
             gradient_clipping=None):
    init_momentum = float(init_momentum)
    assert init_momentum >= 0.
    assert init_momentum <= 1.
    averaging_coeff = float(averaging_coeff)
    assert averaging_coeff >= 0.
    assert averaging_coeff <= 1.
    stabilizer = float(stabilizer)
    assert stabilizer >= 0.
    self.__dict__.update(locals())
    del self.self
    self.momentum = sharedX(self.init_momentum)
    self.update_param_norm_ratio = update_param_norm_ratio
    self.gradient_clipping = gradient_clipping
    if gradient_clipping is not None:
        self.gradient_clipping = np.cast[config.floatX](gradient_clipping)
Electron / Mongoose / MongoDB cast error: "Error: Payload validation failed: video_buffer: Cast to Buffer failed for value "Uint8Array.."
How to solve the Electron / Mongoose / MongoDB cast error "Error: Payload validation failed: video_buffer: Cast to Buffer failed for value "Uint8Array..""
Please help!!!
I am trying to use Mongoose to save a small video clip buffer to MongoDB.
Here is my schema:
const newPayloadSchema = new Schema({
  video_buffer: Buffer,
  use_case: String,
  time_stamp: Number,
})
module.exports = model('Payload', newPayloadSchema);
The object I create to save is:
const payload = {
  video_buffer: buffer,
  use_case: "vid_clips",
  time_stamp: Date.now()
}
console.log of payload.video_buffer yields:
{
[electron] video_buffer: Uint8Array(1946814) [
[electron] 26,69,223,163,159,66,134,129,1,247,[electron] 1,242,4,243,8,130,132,... 1946714 more items
[electron] ]}
Saving...
const newPayload = new Payload(payload);
const PayloadSaved = await newPayload.save();
I get this error:
video_buffer: CastError: Cast to Buffer Failed for value "Uint8Array(1946814) [
[electron] 26,[electron] 1,[electron] 119,101,98,109,135,133,2,[electron] 24,83,128,103,255,[electron] 21,73,169,102,153,42,215,177,131,15,64,[electron] 77,67,104,114,111,87,65,[electron] 67,22,84,174,107,171,[electron] 169,115,197,203,9,28,[electron] 246,[electron] ... 1946714 more items
[electron] ]" at path "video_buffer"
I have checked the schema type and followed what the Mongoose docs suggest.
What am I doing wrong??? I just can't see it.
Any help would be greatly appreciated!
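One plausible cause, offered as an assumption rather than a confirmed diagnosis: data crossing Electron's IPC boundary is structured-cloned, so a Node Buffer created in one process arrives in the other as a plain Uint8Array, which Mongoose's Buffer caster can reject; rewrapping the value with Buffer.from(payload.video_buffer) before calling save() is the usual first thing to try.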
herbetr hits "Cannot cast java.lang.Character to java.lang.String at java.lang.Class.cast"
sql.append("order by a.T_DATA_DATE desc,a.QUES_SUM desc");
Query qu = HibernateUtil.currentSession().createSQLQuery(sql.toString())
.addScalar("ruleCode", Hibernate.STRING)
.addScalar("ruleId", Hibernate.STRING)
.addScalar("schdType", Hibernate.STRING)
.addScalar("chkDate", Hibernate.DATE)
.addScalar("queType", Hibernate.STRING)
.addScalar("chkResult", Hibernate.STRING)
.addScalar("ruleName", Hibernate.STRING)
.addScalar("selSum", Hibernate.INTEGER)
.addScalar("quesSum", Hibernate.INTEGER)
.addScalar("quesState", Hibernate.STRING)
.addScalar("gBatch", Hibernate.STRING)
.addScalar("gId", Hibernate.STRING)
.addScalar("dataDate", Hibernate.STRING)
.addScalar("quesId", Hibernate.STRING)
.addScalar("bigClass", Hibernate.STRING)
.addScalar("smallClass", Hibernate.STRING)
.setCacheable(false);
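A hedged reading of this error: it typically appears when a CHAR(1) database column comes back from the JDBC driver as a java.lang.Character while the matching addScalar(...) declares Hibernate.STRING (single-character flag columns such as schdType or quesState would be the usual suspects in a query like this); declaring that scalar as Hibernate.CHARACTER, or casting the column to varchar in the SQL itself, is the standard way out.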
java.util.LinkedHashMap cannot be cast to xxx and net.sf.ezmorph.bean.MorphDynaBean cannot be cast ...
java.util.LinkedHashMap cannot be cast to com.entity.Person
Using MyBatis, the resultMap maps to the entity class Person; the query returns an ArrayList<Person>, which is then stored in the data property of a ListObject.
The class that holds the result:
public class ListObject {
    private Object data;
    public Object getData() {
        return data;
    }
    public void setData(Object data) {
        this.data = data;
    }
}
Casting with List<Person> persons = (List<Person>) result.getData(); does not throw, and the data is there, but as soon as the for loop runs it fails with java.util.LinkedHashMap cannot be cast to com.entity.Person:
ListObject result = method.query(name);
List<Person> persons = (List<Person>) result.getData();
for (Person per : persons) {
    sourceList.add(per.getId());
}
Solution:
Import the net.sf.json classes and use JSONObject: first convert each element to a JSONObject, then convert that into the entity object.
ListObject result = method.query(name);
List<Object> rows = (List<Object>) result.getData();
for (Object obj : rows) {
    JSONObject jsonObject = JSONObject.fromObject(obj);
    Person per = (Person) JSONObject.toBean(jsonObject, Person.class);
    sourceList.add(per.getId());
}
The key is these two steps:
JSONObject jsonObject = JSONObject.fromObject(obj);                 // convert the element to a JSONObject
Person per = (Person) JSONObject.toBean(jsonObject, Person.class);  // convert the JSONObject into the target object
net.sf.ezmorph.bean.MorphDynaBean cannot be cast to xxx
When the JSON being converted contains a collection, you first need to build a Map that registers the element class of each collection property, and then call JSONObject.toBean(jsonObject, Teacher.class, map) to do the conversion.
Concretely:
The Teacher class has a List<Student> stu field, so converting a Teacher needs that extra map:
import java.util.List;
/**
 * @author xukai
 */
public class Teacher {
    private String teaId;
    private String teaName;
    private List<Student> stu;
    public Teacher() {
    }
    // getter setter
}
JSONObject jsonObject = JSONObject.fromObject(teacher);
Map<String, Class> map = new HashMap<String, Class>();
map.put("stu", Student.class);  // the key is the Teacher field name; add one map.put() per collection property
Teacher teacherBean = (Teacher) JSONObject.toBean(jsonObject, Teacher.class, map);
In short, you add a Map on top of the original call to record the element type of each collection.
Type safety in Java: "Type safety: Unchecked cast from Object to ..." or "Type safety: Unchecked cast from T..."
Java is a type-safe language. This warning usually shows up when casting an Object (or a type parameter T) to a target type, because such a cast cannot be verified at compile time. The common case with JDK 1.5/1.6 generics: request.getAttribute("***") returns a plain Object, and when you cast it to List<***> the compiler cannot check the element type, so it flags the cast as potentially unsafe.
As for silencing the warning, these are the commonly used options (note: the underlying risk is not actually removed):
1. Add @SuppressWarnings("unchecked") to the method.
2. In Eclipse: Window --> Preferences --> Java --> Compiler --> Errors/Warnings --> Generic types, set "Unchecked generic type operation" to Ignore.
3. In Eclipse: Window --> Preferences --> Java --> Compiler, set the compiler compliance level below 1.5.
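A further option worth knowing, offered as an aside: since suppression does nothing at runtime, when an early failure is preferable to a delayed ClassCastException, java.util.Collections.checkedList(list, Person.class) returns a view that type-checks every insertion.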
This concludes the introduction to the Python numpy module cast() example source code (python casting). Thank you for reading patiently. For more on the Electron / Mongoose / MongoDB Buffer cast error, the java.lang.Character to java.lang.String cast error, the LinkedHashMap and MorphDynaBean cast errors, or Java unchecked-cast warnings, please search this site.