
Python numpy module: integer() example source code (the numpy module in Python)



If you are interested in example source code for the Python numpy module's integer(), this article is a good place to start. It walks through the numpy integer() examples in detail, answers related questions about the numpy module in Python, and also covers: why (Integer)100 == (Integer)100 and (Integer)1000 == (Integer)1000 give different results, the difference between ArrayList<Integer> a = new ArrayList<>() and List<Integer> a = new ArrayList<>(), Integer a = 1; Integer b = 1;, and Integer a = 127 vs Integer b = 128.

Table of contents:

Python numpy module: integer() example source code (the numpy module in Python)

Python numpy module: integer() example source code

We have extracted the following code examples from open-source Python projects to illustrate how numpy.integer() is used.
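Before the project snippets, here is a minimal standalone sketch (not taken from any of the projects below, just an assumed illustration) of what np.integer actually is: the abstract base class of all NumPy integer scalar types. That is why the snippets that follow use it in isinstance, issubclass and np.issubdtype checks, and pair it with np.iinfo for range checks.

import numpy as np

# np.integer is the abstract parent of every NumPy integer scalar type,
# so the check matches int8/uint8/.../int64 regardless of width or sign.
print(isinstance(np.int32(7), np.integer))   # True
print(isinstance(7.5, np.integer))           # False

# The same idea works on dtypes via np.issubdtype, a pattern used below.
arr = np.arange(5, dtype=np.uint16)
print(np.issubdtype(arr.dtype, np.integer))  # True

# np.iinfo reports the representable range of a concrete integer dtype,
# e.g. to validate values before an astype() cast (as pack_samples does).
info = np.iinfo(np.int8)
print(info.min, info.max)                    # -128 127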

Project: mpnum    Author: dseuss    | project source | file source

def pack_samples(self, samples, dtype=None):
    """Pack samples into one integer per sample

    Store one sample in a single integer instead of a list of
    integers with length `len(self.nsoutdims)`. Example:

    >>> p = pauli_mpp(nr_sites=2, local_dim=2)
    >>> p.outdims
    (6, 6)
    >>> p.pack_samples(np.array([[0, 1], [1, 0], [1, 2], [5, 5]]))
    array([ 1,  6,  8, 35])

    """
    assert samples.ndim == 2
    assert samples.shape[1] == len(self.nsoutdims)
    samples = np.ravel_multi_index(samples.T, self.nsoutdims)
    if dtype not in (True, False, None) and issubclass(dtype, np.integer):
        info = np.iinfo(dtype)
        assert samples.min() >= info.min
        assert samples.max() <= info.max
        samples = samples.astype(dtype)
    return samples
Project: RFR-solution    Author: baoblackcoal    | project source | file source

def __init__(self, config, model_dir, ob_shape_list):
    self.model_dir = model_dir

    self.cnn_format = config.cnn_format
    self.memory_size = config.memory_size
    self.actions = np.empty(self.memory_size, dtype=np.uint8)
    self.rewards = np.empty(self.memory_size, dtype=np.integer)
    # print(self.memory_size, config.screen_height, config.screen_width)
    # self.screens = np.empty((self.memory_size, config.screen_height, config.screen_width), dtype=np.float16)
    self.screens = np.empty([self.memory_size] + ob_shape_list, dtype=np.float16)
    self.terminals = np.empty(self.memory_size, dtype=np.bool)
    self.history_length = config.history_length
    # self.dims = (config.screen_height, config.screen_width)
    self.dims = tuple(ob_shape_list)
    self.batch_size = config.batch_size
    self.count = 0
    self.current = 0

    # pre-allocate prestates and poststates for minibatch
    self.prestates = np.empty((self.batch_size, self.history_length) + self.dims, dtype=np.float16)
    self.poststates = np.empty((self.batch_size, self.history_length) + self.dims, dtype=np.float16)
    # self.prestates = np.empty((self.batch_size, self.history_length, self.dims), dtype=np.float16)
    # self.poststates = np.empty((self.batch_size, self.history_length, self.dims), dtype=np.float16)
Project: radar    Author: amoose136    | project source | file source

def test_auto_dtype_largeint(self):
    # Regression test for numpy/numpy#5635 whereby large integers could
    # cause OverflowErrors.

    # Test the automatic definition of the output dtype
    #
    # 2**66 = 73786976294838206464 => should convert to float
    # 2**34 = 17179869184 => should convert to int64
    # 2**10 = 1024 => should convert to int (int32 on 32-bit systems,
    #                 int64 on 64-bit systems)

    data = TextIO('73786976294838206464 17179869184 1024')

    test = np.ndfromtxt(data, dtype=None)

    assert_equal(test.dtype.names, ['f0', 'f1', 'f2'])

    assert_(test.dtype['f0'] == np.float)
    assert_(test.dtype['f1'] == np.int64)
    assert_(test.dtype['f2'] == np.integer)

    assert_allclose(test['f0'], 73786976294838206464.)
    assert_equal(test['f1'], 17179869184)
    assert_equal(test['f2'], 1024)
Project: radar    Author: amoose136    | project source | file source

def test_with_incorrect_minlength(self):
    x = np.array([], dtype=int)
    assert_raises_regex(TypeError, "an integer is required",
                        lambda: np.bincount(x, minlength="foobar"))
    assert_raises_regex(ValueError, "must be positive",
                        lambda: np.bincount(x, minlength=-1))
    assert_raises_regex(ValueError, "must be positive",
                        lambda: np.bincount(x, minlength=0))

    x = np.arange(5)
    assert_raises_regex(ValueError, "minlength must be positive",
                        lambda: np.bincount(x, minlength=0))
Project: radar    Author: amoose136    | project source | file source

def test_allclose(self):
    # Tests allclose on arrays
    a = np.random.rand(10)
    b = a + np.random.rand(10) * 1e-8
    self.assertTrue(allclose(a, b))
    # Test allclose w/ infs
    a[0] = np.inf
    self.assertTrue(not allclose(a, b))
    b[0] = np.inf
    self.assertTrue(allclose(a, b))
    # Test allclose w/ masked
    a = masked_array(a)
    a[-1] = masked
    self.assertTrue(allclose(a, b, masked_equal=True))
    self.assertTrue(not allclose(a, b, masked_equal=False))
    # Test comparison w/ scalar
    a *= 1e-8
    a[0] = 0
    self.assertTrue(allclose(a, 0, masked_equal=True))

    # Test that the function works for MIN_INT integer typed arrays
    a = masked_array([np.iinfo(np.int_).min], dtype=np.int_)
    self.assertTrue(allclose(a, a))
Project: aioinflux    Author: plugaai    | project source | file source

def _parse_fields(point):
    output = []
    for k, v in point['fields'].items():
        k = escape(k, key_escape)
        # noinspection PyUnresolvedReferences
        if isinstance(v, bool):
            output.append('{k}={v}'.format(k=k, v=str(v).upper()))
        elif isinstance(v, (int, np.integer)):
            output.append('{k}={v}i'.format(k=k, v=v))
        elif isinstance(v, str):
            output.append('{k}="{v}"'.format(k=k, v=v.translate(str_escape)))
        elif v is None or np.isnan(v):
            continue
        else:
            # Floats and other numerical formats go here.
            # TODO: Add unit test
            output.append('{k}={v}'.format(k=k, v=v))
    return ','.join(output)
Project: diluvian    Author: aschampion    | project source | file source

def get_subvolume(self, bounds):
    if bounds.start is None or bounds.stop is None:
        image_subvol = self.image_data
        label_subvol = self.label_data
    else:
        image_subvol = self.image_data[
                bounds.start[0]:bounds.stop[0],
                bounds.start[1]:bounds.stop[1],
                bounds.start[2]:bounds.stop[2]]
        label_subvol = None

    if np.issubdtype(image_subvol.dtype, np.integer):
        raise ValueError('Sparse volume access does not support image data coercion.')

    seed = bounds.seed
    if seed is None:
        seed = np.array(image_subvol.shape, dtype=np.int64) // 2

    return Subvolume(image_subvol, label_subvol, seed, bounds.label_id)
Project: importance-sampling    Author: idiap    | project source | file source

def __init__(self, X_train, y_train, X_test, y_test, categorical=True):
    self._x_train = X_train
    self._x_test = X_test

    # are the targets to be made one hot vectors
    if categorical:
        self._y_train = np_utils.to_categorical(y_train)
        self._y_test = np_utils.to_categorical(y_test)
        self._output_size = self._y_train.shape[1]

    # handle sparse output classification
    elif issubclass(y_train.dtype.type, np.integer):
        self._y_train = y_train
        self._y_test = y_test
        self._output_size = self._y_train.max() + 1  # assume 0 based indexes

    # not classification, just copy them
    else:
        self._y_train = y_train
        self._y_test = y_test
        self._output_size = self._y_train.shape[1]
Project: imgProcessor    Author: radjkarl    | project source | file source

def _changeArrayDType(img, dtype, **kwargs):
    if dtype == 'noUint':
        return toNoUintArray(img)
    if issubclass(np.dtype(dtype).type, np.integer):
        return toUIntArray(img, **kwargs)
    return img.astype(dtype)


# def bitDepth(path, img=None):
#     '''
#     there are no python filetypes between 8bit and 16 bit
#     so, to find out whether an image is 12 or 14 bit resolved
#     we need to check actual file size and image shape
#     '''
#     if img is None:
#         img = imread(img)
#     size = os.path.getsize(path)*8
#     print (size, img.size, 8888888, img.shape, size/img.size)
#     kh
#     return size/img.size
Project: slither.ml    Author: MadcowD    | project source | file source

def __init__(self, config, model_dir):
    self.model_dir = model_dir

    self.cnn_format = config.cnn_format
    self.memory_size = config.memory_size
    self.actions = np.empty(self.memory_size, dtype=np.integer)
    self.screens = np.empty((self.memory_size, config.screen_height, config.screen_width), dtype=np.bool)
    self.history_length = config.history_length
    self.dims = (config.screen_height, config.screen_width)
    self.batch_size = config.batch_size
    self.count = 0
    self.current = 0

    # pre-allocate prestates and poststates for minibatch
    self.prestates = np.empty((self.batch_size, self.history_length) + self.dims, dtype=np.float16)
    self.poststates = np.empty((self.batch_size, self.history_length) + self.dims, dtype=np.float16)
Project: data_tools    Author: veugene    | project source | file source

def _get_block(self, values, key_remainder=None):
    item_block = None
    for i, v in enumerate(values):
        # Lists in the aggregate key index in tandem;
        # so, index into those lists (the first list is `values`)
        v_key_remainder = key_remainder
        if isinstance(values, tuple) or isinstance(values, list):
            if key_remainder is not None:
                broadcasted_key_remainder = ()
                for k in key_remainder:
                    if hasattr(k, '__len__') and len(k) == np.size(k):
                        broadcasted_key_remainder += (k[i],)
                    else:
                        broadcasted_key_remainder += (k,)
                v_key_remainder = broadcasted_key_remainder

        # Make a single read at an integer index of axis 0
        elem = self._get_element(v, v_key_remainder)
        if item_block is None:
            item_block = np.zeros((len(values),) + elem.shape,
                                  self.dtype)
        item_block[i] = elem
    return item_block
Project: CRIkit2    Author: CoherentRamanNIST    | project source | file source

def fcn(self, data_in):
    """
    If return list, [0] goes to original, [1] goes to affected
    """
    inst_nrb_merge = _MergeNRBs(nrb_left=self.nrb_left,
                                nrb_right=self.nrb_right,
                                pix=self.parameters['pix_switchpt'],
                                left_side_scale=self.parameters['scale_left'])

    if self.fullRange:
        pix = _np.arange(self.wn.size, dtype=_np.integer)

    else:
        list_rng_pix = _find_nearest(self.wn, self.rng)[1]
        pix = _np.arange(list_rng_pix[0], list_rng_pix[1]+1,
                         dtype=_np.integer)

    nrb_merged = inst_nrb_merge.calculate()
    kkd = _np.zeros(data_in.shape)

    # Note: kk_widget.fcn return imag part
    kkd[..., pix] = self.kk_widget.fcn([nrb_merged[pix], data_in[..., pix]])

    return [_np.vstack((self.nrb_left, self.nrb_right, nrb_merged)),
            kkd]
Project: DQN    Author: boluoweifenda    | project source | file source

def __init__(self, path, size, historySize, dims, batchSize):

    self.size = size
    self.dims = dims
    # preallocate memory
    self.actions = np.empty(self.size, dtype=np.uint8)
    self.rewards = np.empty(self.size, dtype=np.integer)
    self.screens = np.empty((self.size, self.dims[0], self.dims[1]), dtype=np.uint8)
    self.terminals = np.empty(self.size, dtype=np.bool)

    self.history_length = historySize
    self.batch_size = batchSize

    self.buffer = np.zeros([self.batch_size, self.history_length] + self.dims, dtype=np.uint8)

    self.count = 0
    self.current = 0

    # pre-allocate prestates and poststates for minibatch
    self.prestates = np.empty([self.batch_size, self.history_length] + self.dims, dtype=np.uint8)
    self.poststates = np.empty([self.batch_size, self.history_length] + self.dims, dtype=np.uint8)
Project: mriqc    Author: poldracklab    | project source | file source

def _prepare_mask(mask, label, erode=True):
    fgmask = mask.copy()

    if np.issubdtype(fgmask.dtype, np.integer):
        if isinstance(label, string_types):
            label = FSL_FAST_LABELS[label]

        fgmask[fgmask != label] = 0
        fgmask[fgmask == label] = 1
    else:
        fgmask[fgmask > .95] = 1.
        fgmask[fgmask < 1.] = 0

    if erode:
        # Create a structural element to be used in an opening operation.
        struc = nd.generate_binary_structure(3, 2)
        # Perform an opening operation on the background data.
        fgmask = nd.binary_opening(fgmask, structure=struc).astype(np.uint8)

    return fgmask
Project: alphacsc    Author: alphacsc    | project source | file source

def check_random_state(seed):
    """Turn seed into a np.random.RandomState instance.

    If seed is None, return the RandomState singleton used by np.random.
    If seed is an int, return a new RandomState instance seeded with seed.
    If seed is already a RandomState instance, return it.
    Otherwise raise ValueError.
    """
    if seed is None or seed is np.random:
        return np.random.mtrand._rand
    if isinstance(seed, (int, np.integer)):
        return np.random.RandomState(seed)
    if isinstance(seed, np.random.RandomState):
        return seed
    raise ValueError('%r cannot be used to seed a numpy.random.RandomState'
                     ' instance' % seed)
Project: pohmm    Author: vmonaco    | project source | file source

def check_random_state(seed):
    """Turn seed into a np.random.RandomState instance

    If seed is None, return the RandomState singleton used by np.random.
    """
    if seed is None or seed is np.random:
        return np.random.mtrand._rand
    if isinstance(seed, (numbers.Integral, np.integer)):
        return np.random.RandomState(seed)
    if isinstance(seed, np.random.RandomState):
        return seed
    raise ValueError('%r cannot be used to seed a numpy.random.RandomState'
                     ' instance' % seed)
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def _can_reindex(self, indexer):
    """
    *this is an internal non-public method*

    Check if we are allowing reindexing with this particular indexer

    Parameters
    ----------
    indexer : an integer indexer

    Raises
    ------
    ValueError if its a duplicate axis
    """

    # trying to reindex on an axis with duplicates
    if not self.is_unique and len(indexer):
        raise ValueError("cannot reindex from a duplicate axis")
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def unique1d(values):
    """
    Hash table-based unique
    """
    if np.issubdtype(values.dtype, np.floating):
        table = _hash.Float64HashTable(len(values))
        uniques = np.array(table.unique(_ensure_float64(values)),
                           dtype=np.float64)
    elif np.issubdtype(values.dtype, np.datetime64):
        table = _hash.Int64HashTable(len(values))
        uniques = table.unique(_ensure_int64(values))
        uniques = uniques.view('M8[ns]')
    elif np.issubdtype(values.dtype, np.timedelta64):
        table = _hash.Int64HashTable(len(values))
        uniques = table.unique(_ensure_int64(values))
        uniques = uniques.view('m8[ns]')
    elif np.issubdtype(values.dtype, np.integer):
        table = _hash.Int64HashTable(len(values))
        uniques = table.unique(_ensure_int64(values))
    else:
        table = _hash.PyObjectHashTable(len(values))
        uniques = table.unique(_ensure_object(values))
    return uniques
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def shift(self, periods, axis=0, mgr=None):
    """ shift the block by periods """
    N = len(self.values.T)
    indexer = np.zeros(N, dtype=int)
    if periods > 0:
        indexer[periods:] = np.arange(N - periods)
    else:
        indexer[:periods] = np.arange(-periods, N)
    new_values = self.values.to_dense().take(indexer)
    # convert integer to float if necessary. need to do a lot more than
    # that, handle boolean etc also
    new_values, fill_value = com._maybe_upcast(new_values)
    if periods > 0:
        new_values[:periods] = fill_value
    else:
        new_values[periods:] = fill_value
    return [self.make_block_same_class(new_values,
                                       placement=self.mgr_locs)]
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def to_sparse(self, fill_value=None, kind='block'):
    """
    Convert to SparseDataFrame

    Parameters
    ----------
    fill_value : float, default NaN
    kind : {'block', 'integer'}

    Returns
    -------
    y : SparseDataFrame
    """
    from pandas.core.sparse import SparseDataFrame
    return SparseDataFrame(self._series, index=self.index,
                           columns=self.columns, default_kind=kind,
                           default_fill_value=fill_value)
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def __getitem__(self, key):

    # shortcut if we are an actual column
    is_mi_columns = isinstance(self.columns, MultiIndex)
    try:
        if key in self.columns and not is_mi_columns:
            return self._getitem_column(key)
    except:
        pass

    # see if we can slice the rows
    indexer = convert_to_index_sliceable(self, key)
    if indexer is not None:
        return self._getitem_slice(indexer)

    if isinstance(key, (Series, np.ndarray, Index, list)):
        # either boolean or fancy integer index
        return self._getitem_array(key)
    elif isinstance(key, DataFrame):
        return self._getitem_frame(key)
    elif is_mi_columns:
        return self._getitem_multilevel(key)
    else:
        return self._getitem_column(key)
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def test_grouper_multilevel_freq(self):

    # GH 7885
    # with level and freq specified in a pd.Grouper
    from datetime import date, timedelta
    d0 = date.today() - timedelta(days=14)
    dates = date_range(d0, date.today())
    date_index = pd.MultiIndex.from_product(
        [dates, dates], names=['foo', 'bar'])
    df = pd.DataFrame(np.random.randint(0, 100, 225), index=date_index)

    # Check string level
    expected = df.reset_index().groupby([pd.Grouper(
        key='foo', freq='W'), pd.Grouper(key='bar', freq='W')]).sum()
    # reset index changes columns dtype to object
    expected.columns = pd.Index([0], dtype='int64')

    result = df.groupby([pd.Grouper(level='foo', freq='W'), pd.Grouper(
        level='bar', freq='W')]).sum()
    assert_frame_equal(result, expected)

    # Check integer level
    result = df.groupby([pd.Grouper(level=0, freq='W'), pd.Grouper(
        level=1, freq='W')]).sum()
    assert_frame_equal(result, expected)
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def test_floats(self):
    arr = np.array([1., 2., 3., np.float64(4), np.float32(5)], dtype='O')
    result = lib.infer_dtype(arr)
    self.assertEqual(result, 'floating')

    arr = np.array([1, 2, 3, np.float64(4), np.float32(5), 'foo'],
                   dtype='O')
    result = lib.infer_dtype(arr)
    self.assertEqual(result, 'mixed-integer')

    arr = np.array([1, 2, 3, 4, 5], dtype='f4')
    result = lib.infer_dtype(arr)
    self.assertEqual(result, 'floating')

    arr = np.array([1, 2, 3, 4, 5], dtype='f8')
    result = lib.infer_dtype(arr)
    self.assertEqual(result, 'floating')
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def test_fancy_setitem_int_labels(self):
    # integer index defers to label-based indexing

    df = DataFrame(np.random.randn(10, 5), index=np.arange(0, 20, 2))

    tmp = df.copy()
    exp = df.copy()
    tmp.ix[[0, 2, 4]] = 5
    exp.values[:3] = 5
    assert_frame_equal(tmp, exp)

    tmp = df.copy()
    exp = df.copy()
    tmp.ix[6] = 5
    exp.values[3] = 5
    assert_frame_equal(tmp, exp)

    tmp = df.copy()
    exp = df.copy()
    tmp.ix[:, 2] = 5

    # tmp correctly sets the dtype
    # so match the exp way
    exp[2] = 5
    assert_frame_equal(tmp, exp)
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def test_default_type_conversion(self):
    df = sql.read_sql_table("types_test_data", self.conn)

    self.assertTrue(issubclass(df.FloatCol.dtype.type, np.floating),
                    "FloatCol loaded with incorrect type")
    self.assertTrue(issubclass(df.IntCol.dtype.type, np.integer),
                    "IntCol loaded with incorrect type")
    self.assertTrue(issubclass(df.BoolCol.dtype.type, np.bool_),
                    "BoolCol loaded with incorrect type")

    # Int column with NA values stays as float
    self.assertTrue(issubclass(df.IntColWithNull.dtype.type, np.floating),
                    "IntColWithNull loaded with incorrect type")
    # Bool column with NA values becomes object
    self.assertTrue(issubclass(df.BoolColWithNull.dtype.type, np.object),
                    "BoolColWithNull loaded with incorrect type")
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def test_default_type_conversion(self):
    df = sql.read_sql_table("types_test_data", self.conn)

    self.assertTrue(issubclass(df.IntCol.dtype.type, np.integer),
                    "IntCol loaded with incorrect type")
    # sqlite has no boolean type, so integer type is returned
    self.assertTrue(issubclass(df.BoolCol.dtype.type, np.integer),
                    "BoolCol loaded with incorrect type")

    # Int column with NA values stays as float
    self.assertTrue(issubclass(df.IntColWithNull.dtype.type, np.floating),
                    "IntColWithNull loaded with incorrect type")
    # Non-native Bool column with NA values stays as float
    self.assertTrue(issubclass(df.BoolColWithNull.dtype.type, np.floating),
                    "BoolColWithNull loaded with incorrect type")

    # MySQL has no real BOOL type (it's an alias for TINYINT)
    self.assertTrue(issubclass(df.BoolCol.dtype.type, np.integer),
                    "BoolCol loaded with incorrect type")
    # Bool column with NA = int column with NA values => becomes float
    self.assertTrue(issubclass(df.BoolColWithNull.dtype.type, np.floating),
                    "BoolColWithNull loaded with incorrect type")
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def __init__(self, f, colspecs, delimiter, comment):
    self.f = f
    self.buffer = None
    self.delimiter = '\r\n' + delimiter if delimiter else '\n\r\t '
    self.comment = comment
    if colspecs == 'infer':
        self.colspecs = self.detect_colspecs()
    else:
        self.colspecs = colspecs

    if not isinstance(self.colspecs, (tuple, list)):
        raise TypeError("column specifications must be a list or tuple, "
                        "input was a %r" % type(colspecs).__name__)

    for colspec in self.colspecs:

        if not (isinstance(colspec, (tuple, list)) and
                len(colspec) == 2 and
                isinstance(colspec[0], (int, np.integer, type(None))) and
                isinstance(colspec[1], (int, np.integer, type(None)))):
            raise TypeError('Each column specification must be '
                            '2 element tuple or list of integers')
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def set_atom_categorical(self, block, items, info=None, values=None):
    # currently only supports a 1-D categorical
    # in a 1-D block

    values = block.values
    codes = values.codes
    self.kind = 'integer'
    self.dtype = codes.dtype.name
    if values.ndim > 1:
        raise NotImplementedError("only support 1-d categoricals")
    if len(items) > 1:
        raise NotImplementedError("only support single block categoricals")

    # write the codes; must be in a block shape
    self.ordered = values.ordered
    self.typ = self.get_atom_data(block, kind=codes.dtype.name)
    self.set_data(_block_shape(codes))

    # write the categories
    self.meta = 'category'
    self.set_metadata(block.values.categories)

    # update the info
    self.update_info(info)
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def _handle_date_column(col, format=None):
    if isinstance(format, dict):
        return to_datetime(col, errors='ignore', **format)
    else:
        if format in ['D', 's', 'ms', 'us', 'ns']:
            return to_datetime(col, errors='coerce', unit=format, utc=True)
        elif (issubclass(col.dtype.type, np.floating) or
              issubclass(col.dtype.type, np.integer)):
            # parse dates as timestamp
            format = 's' if format is None else format
            return to_datetime(col, errors='coerce', unit=format, utc=True)
        elif com.is_datetime64tz_dtype(col):
            # coerce to UTC timezone
            # GH11216
            return (to_datetime(col, errors='coerce')
                    .astype('datetime64[ns, UTC]'))
        else:
            return to_datetime(col, format=format, utc=True)
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def _get_dtype(self, sqltype):
    from sqlalchemy.types import (Integer, Float, Boolean, DateTime,
                                  Date, TIMESTAMP)

    if isinstance(sqltype, Float):
        return float
    elif isinstance(sqltype, Integer):
        # TODO: Refine integer size.
        return np.dtype('int64')
    elif isinstance(sqltype, TIMESTAMP):
        # we have a timezone capable type
        if not sqltype.timezone:
            return datetime
        return DatetimeTZDtype
    elif isinstance(sqltype, DateTime):
        # Caution: np.datetime64 is also a subclass of np.number.
        return datetime
    elif isinstance(sqltype, Date):
        return date
    elif isinstance(sqltype, Boolean):
        return bool
    return object
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def _sql_type_name(self, col):
    dtype = self.dtype or {}
    if col.name in dtype:
        return dtype[col.name]

    col_type = self._get_notnull_col_dtype(col)
    if col_type == 'timedelta64':
        warnings.warn("the 'timedelta' type is not supported, and will be "
                      "written as integer values (ns frequency) to the "
                      "database.", UserWarning, stacklevel=8)
        col_type = "integer"

    elif col_type == "datetime64":
        col_type = "datetime"

    elif col_type == "empty":
        col_type = "string"

    elif col_type == "complex":
        raise ValueError('Complex datatypes not supported')

    if col_type not in _SQL_TYPES:
        col_type = "string"

    return _SQL_TYPES[col_type][self.pd_sql.flavor]
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def __getitem__(self, key):
    """

    """
    try:
        return self._get_val_at(self.index.get_loc(key))

    except KeyError:
        if isinstance(key, (int, np.integer)):
            return self._get_val_at(key)
        raise Exception('Requested index not in this series!')

    except TypeError:
        # Could not hash item, must be array-like?
        pass

    # is there a case where this would NOT be an ndarray?
    # need to find an example, I took out the case for now

    key = _values_from_object(key)
    dataSlice = self.values[key]
    new_index = Index(self.index.view(ndarray)[key])
    return self._constructor(dataSlice, index=new_index).__finalize__(self)
  1. def test_allclose(self):
  2. # Tests allclose on arrays
  3. a = np.random.rand(10)
  4. b = a + np.random.rand(10) * 1e-8
  5. self.assertTrue(allclose(a, b))
  6. # Test all close w/ masked
  7. a = masked_array(a)
  8. a[-1] = masked
  9. self.assertTrue(allclose(a, a))
Project: mlens    Author: flennerhag    | project source | file source

def check_random_state(seed):
    """Turn seed into a np.random.RandomState instance
    Parameters
    ----------
    seed : None | int | instance of RandomState
        If seed is None, return the RandomState singleton used by np.random.
        If seed is an int, return a new RandomState instance seeded with seed.
        If seed is already a RandomState instance, return it.
        Otherwise raise ValueError.
    """
    if seed is None or seed is np.random:
        return np.random.mtrand._rand
    if isinstance(seed, (int, np.integer)):
        return np.random.RandomState(seed)
    if isinstance(seed, np.random.RandomState):
        return seed
    raise ValueError('%r cannot be used to seed a numpy.random.RandomState'
                     ' instance' % seed)
Project: zorro    Author: C-CINA    | project source | file source

def guessCfgType( value ):
    # For guessing the data type (bool, integer, float, or string only) from ConfigParser
    if value.lower() == 'true':
        return True
    if value.lower() == 'false':
        return False
    try:
        value = np.int( value )
        return value
    except:
        pass
    try:
        value = np.float32( value )
        return value
    except:
        pass
    return value
Project: zipline-chinese    Author: zhanghan1990    | project source | file source

def check_window_length(window_length):
    """
    Ensure the window length provided to a transform is valid.
    """
    if window_length is None:
        raise InvalidWindowLength("window_length must be provided")
    if not isinstance(window_length, Integral):
        raise InvalidWindowLength(
            "window_length must be an integer-like number")
    if window_length == 0:
        raise InvalidWindowLength("window_length must be non-zero")
    if window_length < 0:
        raise InvalidWindowLength("window_length must be positive")
Project: zipline-chinese    Author: zhanghan1990    | project source | file source

def _extract_field_names(self, event):
    # extract field names from sids (price, volume etc), make sure
    # every sid has the same fields.
    sid_keys = []
    for sid in itervalues(event.data):
        keys = set([name for name, value in sid.items()
                    if isinstance(value,
                                  (int,
                                   float,
                                   numpy.integer,
                                   numpy.float,
                                   numpy.long))
                    ])
        sid_keys.append(keys)

    # with CUSTOM data events, there may be different fields
    # per sid. So the allowable keys are the union of all events.
    union = set.union(*sid_keys)
    unwanted_fields = {
        'portfolio',
        'sid',
        'dt',
        'type',
        'source_id',
        '_initial_len',
    }
    return union - unwanted_fields
Project: gee-bridge    Author: francbartoli    | project source | file source

def RATWriteArray(rat, array, field, start=0):
    """
    Pure Python implementation of writing a chunk of the RAT
    from a numpy array. Type of array is coerced to one of the types
    (int, double, string) supported. Called from RasterAttributeTable.WriteArray
    """
    if array is None:
        raise ValueError("Expected array of dim 1")

    # if not the array type convert it to handle lists etc
    if not isinstance(array, numpy.ndarray):
        array = numpy.array(array)

    if array.ndim != 1:
        raise ValueError("Expected array of dim 1")

    if (start + array.size) > rat.GetRowCount():
        raise ValueError("Array too big to fit into RAT from start position")

    if numpy.issubdtype(array.dtype, numpy.integer):
        # is some type of integer - coerce to standard int
        # TODO: must check this is fine on all platforms
        # confusingly numpy.int 64 bit even if native type 32 bit
        array = array.astype(numpy.int32)
    elif numpy.issubdtype(array.dtype, numpy.floating):
        # is some type of floating point - coerce to double
        array = array.astype(numpy.double)
    elif numpy.issubdtype(array.dtype, numpy.character):
        # cast away any kind of Unicode etc
        array = array.astype(numpy.character)
    else:
        raise ValueError("Array not of a supported type (integer, double or string)")

    return RATValuesIONumPyWrite(rat, field, start, array)
Project: sea-lion-counter    Author: rdinse    | project source | file source

def default(self, obj):
    if isinstance(obj, np.integer):
        return int(obj)
    elif isinstance(obj, np.ndarray):
        return obj.tolist()
    elif isinstance(obj, np.floating):
        return float(obj)
    else:
        return super(MyEncoder, self).default(obj)
Project: NeoAnalysis    Author: neoanalysis    | project source | file source

def writeHDF5Meta(self, root, name, data, **dsOpts):
    if isinstance(data, np.ndarray):
        dsOpts['maxshape'] = (None,) + data.shape[1:]
        root.create_dataset(name, data=data, **dsOpts)
    elif isinstance(data, list) or isinstance(data, tuple):
        gr = root.create_group(name)
        if isinstance(data, list):
            gr.attrs['_MetaType_'] = 'list'
        else:
            gr.attrs['_MetaType_'] = 'tuple'
        #n = int(np.log10(len(data))) + 1
        for i in range(len(data)):
            self.writeHDF5Meta(gr, str(i), data[i], **dsOpts)
    elif isinstance(data, dict):
        gr = root.create_group(name)
        gr.attrs['_MetaType_'] = 'dict'
        for k, v in data.items():
            self.writeHDF5Meta(gr, k, v, **dsOpts)
    elif isinstance(data, int) or isinstance(data, float) or isinstance(data, np.integer) or isinstance(data, np.floating):
        root.attrs[name] = data
    else:
        try:  ## strings, bools, None are stored as repr() strings
            root.attrs[name] = repr(data)
        except:
            print("Can not store meta data of type '%s' in HDF5. (key is '%s')" % (str(type(data)), str(name)))
            raise
Project: mpnum    Author: dseuss    | project source | file source

def repeat(self, nr_sites):
    """Construct a longer MP-POVM by repetition

    The resulting POVM will have length `nr_sites`. If `nr_sites`
    is not an integer multiple of `len(self)`, `self` must
    factorize (have leg dimension one) at the position where it
    will be cut. For example, consider the tensor product MP-POVM
    of Pauli X and Pauli Y. Calling `repeat(nr_sites=5)` will
    construct the tensor product POVM XYXYX:

    >>> import mpnum as mp
    >>> import mpnum.povm as mpp
    >>> x, y = (mpp.MPPovm.from_local_povm(lp(3), 1) for lp in
    ...         (mpp.x_povm, mpp.y_povm))
    >>> xy = mp.chain([x, y])
    >>> xyxyx = mp.chain([x, y, x, y, x])
    >>> mp.norm(xyxyx - xy.repeat(5)) <= 1e-10
    True

    """
    n_repeat, n_last = nr_sites // len(self), nr_sites % len(self)
    if n_last > 0:
        assert self.ranks[n_last - 1] == 1, \
            "Partial repetition requires factorizing MP-POVM"
    return mp.chain([self] * n_repeat
                    + ([MPPovm(self.lt[:n_last])] if n_last > 0 else []))
Project: mpnum    Author: dseuss    | project source | file source

def est_pmf(self, samples, normalize=True, eps=1e-10):
    """Estimate probability mass function from samples

    :param np.ndarray samples: `(n_samples, len(self.nsoutdims))`
        array of samples
    :param bool normalize: True: Return normalized probability
        estimates (default). False: Return integer outcome counts.
    :returns: Estimated probabilities as ndarray `est_pmf` with
        shape `self.nsoutdims`

    `n_samples * est_pmf[i1, ..., ik]` provides the number of
    occurrences of outcome `(i1, ..., ik)` in `samples`.

    """
    n_samples = samples.shape[0]
    n_out = np.prod(self.nsoutdims)
    if samples.ndim > 1:
        samples = self.pack_samples(samples)
    counts = np.bincount(samples, minlength=n_out)
    assert counts.shape == (n_out,)
    counts = counts.reshape(self.nsoutdims)
    assert counts.sum() == n_samples
    if normalize:
        return counts / n_samples
    else:
        return counts

Why do (Integer)100 == (Integer)100 and (Integer)1000 == (Integer)1000 give different results?

The difference between ArrayList<Integer> a = new ArrayList<>(); and List<Integer> a = new ArrayList<>();

How do I think about the difference between ArrayList<Integer> a = new ArrayList<>(); and List<Integer> a = new ArrayList<>();?

I would like to know what the difference is between these two:

ArrayList<Integer> a = new ArrayList<>();
List<Integer> a = new ArrayList<>();

I would also like to know how both of these differ from ArrayList a = new ArrayList();

Solution

Conceptually, the two declarations differ at coding time (while you program) and at compile time.

When you declare the variable with the deeper type (in inheritance terms), you lose nothing at all: you can use every method of ArrayList and browse its documentation in your IDE.

When you declare the variable with the shallower type (in inheritance terms), you seemingly "erase" its actual type — but not really. At runtime you can recover it with reflection:

List<Integer> simpleList = new ArrayList<>();
System.out.println(simpleList.getClass().getName());
// java.util.ArrayList

When you use the list, it still behaves like an ArrayList (because it is an ArrayList), but now you treat it as a List (the L in SOLID: the Liskov substitution principle).

But when you declare a method, don't use ArrayList as the parameter type, because then nobody can pass a LinkedList (ha, kidding), a Deque, an Arrays.ArrayList, or whatever else they happen to be using.

ArrayList a = new ArrayList(); is equivalent to ArrayList<Object> a = new ArrayList<>();

This is called raw use of a parameterized class, and it is discouraged even if you really do want an ArrayList of Object.


There is not much difference between the two:

ArrayList<Integer> a = new ArrayList<>()
List<Integer> a = new ArrayList<>()

But it is recommended to declare against the interface rather than the implementation, like this:

List<Integer> a = new ArrayList<>();

Also, another common practice is to use var instead of spelling out the type:

var a = new ArrayList<Integer>();

With ArrayList a = new ArrayList(), you create the list without specifying a type parameter, which means the list can hold data of several different types.

That is the difference from the first two lists: in those, you cannot add, say, a String — only Integers.

Integer a = 1; Integer b = 1;

Integer a = 1;
Integer b = 1;
Integer c = 500;
Integer d = 500;
System.out.println(a == b);
System.out.println(c == d);
Integer aa=new Integer(10);
Integer bb=new Integer(10);
int cc=10;
System.out.println(aa == bb);
System.out.println(aa == cc);



The output is:
true
false
false
true

Integer a = 1; uses autoboxing, which calls the Integer.valueOf(int) method. That method's documentation reads:
This method will always cache values in the range -128 to 127, inclusive, and may cache other values outside of this range.
In other words, the IntegerCache class caches the Integer instances for -128 through 127, and calling valueOf for a value in that range does not create a new instance.
Integer values in the range -128 to 127 are cached, so every boxed value in that range refers to the same object: assignment simply takes the instance from the cache and no new object is created. Outside that range a new Integer object is created (a fresh allocation), so the addresses differ and == returns false.

1. When a wrapper type and a primitive are compared with "==", the wrapper is automatically unboxed to the primitive; so Integer(0) compared with 0 unboxes, and the result is true.

2. When two autoboxed Integers are compared with "==", the result is true if the value lies between -128 and 127, and false otherwise.

3. When two wrapper objects are compared with "equals", the type is checked first; if the types match, the values are compared, and only if the values also match is the result true.

4. If "equals" is called with a primitive argument, the argument is autoboxed to the wrapper type first.

Integer a = 127 vs Integer b = 128

Integer a = 127;

Integer b = 127;

Integer c = 128;

Integer d = 128;

What are the results of a == b and c == d?

a == b is true, while c == d is false.

Why does this happen? A bit of searching reveals the root cause. Before resolving the question, let's first review the concept of the constant pool (Baidu Baike has an entry on it).

Here is a restatement.

The Java constant pool

In Java, the constant pool holds values that are fixed at compile time. It includes constants belonging to classes, methods and interfaces, as well as string constants; for example, a declaration such as String s = "Java" produces a "constant" that is placed in the pool. The constant pool is a special region of JVM memory.
The constant-pool mechanism exists to make creating certain objects cheap and fast: when you need such an object, the pool is consulted first, and a new one is created in the pool only if it is not found. Note, however, that an object created with new, whatever its type, is never placed in the pool; it gets freshly allocated space on the heap instead.
Most of the primitive wrapper classes in Java implement this caching: Byte, Short, Integer, Long, Character and Boolean do, while the two floating-point wrappers do not. Moreover, the five integral wrappers (Byte, Short, Integer, Long, Character) only use the object pool for values up to 127; beyond that, space must be allocated for a new object.
As we know, "==" compares reference identity (even though Java does not let you manipulate memory directly).

When we write Integer a = 127, what actually gets called is the following method:

public static Integer valueOf(int i) {
    assert IntegerCache.high >= 127;
    if (i >= IntegerCache.low && i <= IntegerCache.high)
        return IntegerCache.cache[i + (-IntegerCache.low)];
    return new Integer(i);
}
This method first asserts that IntegerCache.high is at least 127 (see the supplementary note on that assert); otherwise the method bails out.
Then the if condition requires i to lie between the low and high values.
In the IntegerCache source, low is -128, so the if condition requires i to lie between -128 and 127; in that case i + 128 is used as the index into the Integer array cache, and the value is taken from the cache. This means that any two Integers with the same numeric value in the range -128 to 127 refer to the same instance (the same "memory address").
That explains why a == b returns true when Integer a = 127 and b = 127.
If the condition is not satisfied, new Integer(i) is returned instead.

In short: if the value lies between -128 and 127, the object from the pool is returned; otherwise a new one is created.
--------------

In fact, if the question were instead:
Integer a = new Integer(127);

Integer b = new Integer(127);

Integer c = 128;

Integer d = 128;

then a == b is definitely false: the constant pool is never consulted, a and b each get freshly allocated space on the heap, and the returned references are necessarily different.

That wraps up today's look at the Python numpy module integer() example source code and the numpy module in Python. We hope you found it useful. For more on why (Integer)100 == (Integer)100 and (Integer)1000 == (Integer)1000 behave differently, the difference between ArrayList<Integer> a = new ArrayList<>() and List<Integer> a = new ArrayList<>(), Integer a = 1; Integer b = 1;, and Integer a = 127 vs Integer b = 128, you can search this site.
