
What causes the error "_pickle.UnpicklingError: invalid load key, ''."? (the compileonly fix doesn't work)


This article walks you through what causes the error "_pickle.UnpicklingError: invalid load key, ''." and why the compileonly fix doesn't work. We also provide practical information on: dill: solving Python's "AttributeError: Can't pickle local object" and the inability to pickle lambda functions; FileNotFoundError: [Errno 2] No such file or directory: 'y.pickle'; multiprocessing.log_to_stderr() raising UnpickleableError: Cannot pickle <type 'thread.lock'> objects under Python 2.6; and multiprocessing.Pool - PicklingError: Can't pickle <type 'thread.lock'>: attribute lookup thread.lock failed.

Table of contents:

What causes the error "_pickle.UnpicklingError: invalid load key, ''."? (the compileonly fix doesn't work)

I am trying to store 5000 data elements in an array. These 5000 elements are stored in an existing file (so the file is not empty).

But I am getting an error and I don't know what is causing it.

In:

def array():
    name = 'puntos.df4'
    m = open(name, 'rb')
    v = []*5000
    m.seek(-5000, io.SEEK_END)
    fp = m.tell()
    sz = os.path.getsize(name)
    while fp < sz:
        pt = pickle.load(m)
        v.append(pt)
    m.close()
    return v

Out:

line 23, in array
    pt = pickle.load(m)
_pickle.UnpicklingError: invalid load key, ''.

Answer 1


Pickling is recursive, not sequential. To pickle a list, pickle starts with the containing list, then pickles its first element... diving into that element and pickling its dependencies and sub-elements until the first element is fully serialized. Then it moves on to the next element of the list, and so on, until the whole list is serialized. In short, except for some special cases, it is hard to treat a recursive pickle as sequential. If you want to load in a special way, you are better off using a smarter pattern on the dump side.

The most common use of pickle is to serialize everything with a single dump to a file, but then you must load everything back with a single load. However, if you open a file handle and make multiple dump calls (e.g. one per list element, or one per tuple of selected elements), then your load must mirror that: open the file handle and make multiple load calls until you have all the list elements and can rebuild the list. Selectively loading only certain list elements is still not easy, though. To do that, you would probably have to store the list elements as a dict (keyed by the index of the element or chunk) using a package like klepto, which can split a pickled dict transparently across multiple files and makes it easy to load specific elements.
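A minimal sketch of that dump/load mirroring (the filename puntos.df4 comes from the question; the helper names are illustrative): write each element with its own dump call, then read them back with repeated load calls until the stream is exhausted. Note that pickle records are variable-length, so seeking to a byte offset such as seek(-5000, io.SEEK_END) lands in the middle of a record, and the next load sees garbage bytes: exactly the "invalid load key" error.

```python
import os
import pickle
import tempfile

def dump_points(path, points):
    # One dump() call per element: the file becomes a sequence of
    # independent pickle records.
    with open(path, "wb") as f:
        for pt in points:
            pickle.dump(pt, f)

def load_points(path):
    # Mirror the writing side: repeated load() calls from the start of
    # the file until the stream ends. Never seek to an arbitrary byte
    # offset, because records are variable-length.
    points = []
    with open(path, "rb") as f:
        while True:
            try:
                points.append(pickle.load(f))
            except EOFError:
                break
    return points

path = os.path.join(tempfile.mkdtemp(), "puntos.df4")
dump_points(path, list(range(5000)))
v = load_points(path)
print(len(v))  # 5000
```

Counting elements to decide when to stop (as the question does with os.path.getsize) is fragile; catching EOFError keeps the reader in lock-step with however many records were written.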

dill: solving Python's "AttributeError: Can't pickle local object" and the inability to pickle lambda functions

Python's pickle is a convenient tool for serializing objects, but pickle requires that the object passed in is not defined locally (nested inside another scope), and is not a lambda function.

For example, trying to pickle a locally defined class (the original post showed the code as a screenshot):

The result is the error AttributeError: Can't pickle local object

This problem can be solved with the third-party library dill (https://pypi.org/project/dill/); the original post's screenshots show the same code succeeding once pickle is swapped for dill.

Beyond the standard types that pickle handles, dill copes with these extra cases as well (again shown as a screenshot in the original post). All in all, it is quite handy.

(Also, Python's built-in shelve library uses pickle as its backend, so by default it cannot handle local classes or lambda functions either.)
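The screenshots are lost, but the failure and the dill fix are easy to reconstruct. This is a sketch, not the post's exact code; make_adder is an illustrative name:

```python
import pickle

def make_adder(n):
    # A local (nested) function: plain pickle refuses to serialize it,
    # just as it refuses locally defined classes and lambdas.
    def add(x):
        return x + n
    return add

f = make_adder(3)

try:
    pickle.dumps(f)
except (AttributeError, pickle.PicklingError) as e:
    print("pickle failed:", type(e).__name__)

try:
    import dill  # pip install dill
    g = dill.loads(dill.dumps(f))  # dill serializes the function by value
    print(g(4))  # prints 7 when dill is installed
except ImportError:
    pass
```

pickle only stores a reference (module.qualname) for functions and classes, which is why anything defined inside another scope fails the lookup; dill instead serializes the code object and closure themselves.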

FileNotFoundError: [Errno 2] No such file or directory: 'y.pickle'

How do I solve FileNotFoundError: [Errno 2] No such file or directory: 'y.pickle'?

I am following this code tutorial - How to use a trained model - Deep Learning basics with Python, TensorFlow and Keras.

Below is the code I am using:

import tensorflow as tf
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense,Dropout,Activation,Flatten,Conv2D,MaxPooling2D
import pickle

X = pickle.load(open("X.pickle","rb"))
y = pickle.load(open("y.pickle","rb"))

X = X/255.0


model = Sequential()
model.add(Conv2D(64,(3,3),input_shape = X.shape[1:]))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Conv2D(64,(3,3)))
model.add(Activation("relu"))
model.add(MaxPooling2D(pool_size=(2,2)))

model.add(Flatten())
model.add(Dense(64))

model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss="binary_crossentropy",optimizer="Adam",metrics=["accuracy"])

model.fit(X,y,batch_size=32,validation_split=0.1)

I ran into the following error:

FileNotFoundError                         Traceback (most recent call last)
<ipython-input-15-8560275e4263> in <module>
      6 
      7 X = pickle.load(open("X.pickle","rb"))
----> 8 y = pickle.load(open("y.pickle","rb"))
      9 
     10 X = X/255.0

FileNotFoundError: [Errno 2] No such file or directory: 'y.pickle'

Solution

No ready-made answer was posted for this question, but the error itself is unambiguous: pickle.load never ran, because open("y.pickle","rb") could not find the file. The tutorial's earlier part writes X.pickle and y.pickle to disk; since X.pickle loaded fine here, y.pickle was either never written by that step or was saved to a different directory than the notebook's current working directory.
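A small diagnostic sketch (the file names come from the question; the load_pickle helper is hypothetical): make the failure explicit by resolving the path and reporting the working directory before calling pickle.load.

```python
import os
import pickle

def load_pickle(path):
    # Relative paths are resolved against the current working directory,
    # which in a notebook is wherever the kernel was started.
    full = os.path.abspath(path)
    if not os.path.exists(full):
        raise FileNotFoundError(
            "%s not found (cwd=%s): run the tutorial step that creates "
            "X.pickle/y.pickle, or change into their directory"
            % (full, os.getcwd()))
    with open(full, "rb") as f:
        return pickle.load(f)
```

With this in place, `X = load_pickle("X.pickle")` either works or tells you exactly which absolute path Python looked at.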

multiprocessing.log_to_stderr() raises UnpickleableError: Cannot pickle <type 'thread.lock'> objects under Python 2.6

The error:

UnpickleableError: Cannot pickle <type 'thread.lock'> objects;

Google turned up nothing; I would appreciate any help with this.

Background:

I wrote a Python script to load-test a WebSocket service; the error appeared when I added logging:

Environment:

Running under Python 2.6

Code:

#!/usr/bin/env python
#coding:UTF-8
'''
#=================================================================
Created on 2017.2.27
@Author :
@Desc :
1. WebSocket load-test tool; spawns multiple processes to send requests in batches
2. Long-connection mode: the ws channel is reused
3. Short-connection mode: open the ws channel, send one request, close the channel
@FileName : WSPerformanceTestTools.py
@version: 1.0
@Date : 2017-04-05
#=================================================================
'''

import websocket
import types
import copy_reg
import multiprocessing
from multiprocessing import freeze_support
import hashlib
import os,time,traceback
import thread,random
import logging,pdb
from optparse import OptionParser 
import sys
import this

reload(sys)
sys.setdefaultencoding('utf-8')
#pdb.set_trace()

def _pickle_method(m):
    if m.im_self is None:
        return getattr, (m.im_class, m.im_func.func_name)
    else:
        return getattr, (m.im_self, m.im_func.func_name)
    
def _unpickle_method(func_name, obj, cls):
    for cls in cls.mro():
        try:
            func = cls.__dict__[func_name]
        except KeyError:
            pass
    return func.__get__(obj, cls)
copy_reg.pickle(types.MethodType, _pickle_method,_unpickle_method)



class CMDArgs:
    """命令行参数处理类."""
    def __init__(self):
        self.parser = None
        self.options = None
        self.args = None
        self.cmd_args_init()

    def cmd_args_init(self):
        """命令行参数初始化."""        
        parser = OptionParser(usage="python WSPerformanceTestTools.py [options] ")
        parser.add_option("-u", type="string", default="ws//wxa.jd.com/ws", dest="url", help="url")  
        parser.add_option("-p", type="int", default="1", dest="processnum", help="process  num ")
        parser.add_option("-f", type="string", default="", dest="contextfile", help="text context file ")
        parser.add_option("-r", type="string", default="", dest="resultfile", help="result and log file")
        parser.add_option("-l", type="int", default="0", dest="logflag", help="log flag")
        #parser.add_option("-m", type="boolean", default="", dest="mode", help="time or send num")
        self.parser = parser

    def cmd_args_parse(self):
        """解析输入参数."""
        self.options, self.args = self.parser.parse_args()
        #print dir(self.options), self.args  
        self.cmd_check_args()
        self.cmd_check_filepath()

    def print_usage(self):
        """打印命令行参数帮助信息."""
        self.parser.print_help()

    def cmd_check_args(self):
        """检查输入的非选项参数格式是否正确."""
        if self.options.url is  None or self.options.url == ""  :
            raise CMDArgsException("url error !")
        if self.options.processnum is  None or self.options.processnum <= 0  :
            raise CMDArgsException("process num error !")
        if self.options.contextfile is  None or self.options.contextfile == "" :
            raise CMDArgsException("send text file path error!")
        if self.options.resultfile is  None or self.options.resultfile == "" :
            raise CMDArgsException("result file path error!")
        if self.options.logflag is None:
            self.options.logflag = False
    def cmd_check_filepath(self):
        pass




class PerformanceTool(object):
    '''Global variables'''
    _processnum = 0 # number of processes
    _contextfile = "" # path of the text file whose content is sent
    _resultfile = "" # path for saving results and logs
    _logflag = 0 # log switch, 0: off, 1: on
    _timeout = 0 # timeout setting
    _url = ""
    #_flag = 0 # all-processes-ready flag
    
    sendText = "" # text content to send
    sendMode = 1 # request mode, 0: long connection, 1: short connection
    calcMode = 1 # 0: run for a duration, 1: run for a request count
    sendnum = 0 # number of requests per thread
    sendTime = 0 # request duration
    checkkey = ""
    
    '''Result statistics'''
    file = None
    _totalnum = multiprocessing.Value("l",0,lock=True)
    _Success = multiprocessing.Value("l",0,lock=True)
    _error = multiprocessing.Value("l",0,lock=True)
    _qps = 0
    _worktime = 0
    _min = multiprocessing.Value("f",1000,lock=True)
    _max = multiprocessing.Value("f",0,lock=True)
    
    STATUS_NORMAL = 1000
    pool = None 
    jobs = []

    LOGFORMAT = '%(asctime)-15s %(clientip)s %(user)-8s %(message)s'
    logger = None
    logpath = str(__name__ + ".log")
    fhdlr = None
    
    def __init__(self,
                 url,
                 _processnum,
                 _contextfile,
                 _resultfile,
                 _timeout,
                 sendnum,
                 sendTime,
                 checkkey,
                 sendMode,
                 calcMode
                 ):
        self._url = url
        self._processnum = _processnum
        self._contextfile = _contextfile
        self._resultfile = _resultfile
        self._timeout = _timeout
        self.sendnum = sendnum
        self.sendTime = sendTime
        self.checkkey = checkkey
        self.sendMode = sendMode
        self.calcMode = calcMode
        self.pool = multiprocessing.Pool(processes=_processnum)
        '''log config'''
        multiprocessing.log_to_stderr()
        self.logger = multiprocessing.get_logger()
        self.logger.setLevel(logging.DEBUG)
        
        
    def suit(self,i):
        
        if self.calcMode == 0 : # run for a duration
            self.logger.debug("run with executing time")
            self._sendTextByTime(self._totalnum,self._Success,self._error,self._min,self._max)
            
        else: # run for a request count
            self.logger.debug("run with send number")
            self._sendTextByNum(self._totalnum,self._Success,self._error,self._min,self._max)
  
    def test(self,i):
        print "pid: " + str(os.getpid())      
    def run(self):
        self._getSendText()
        self.logger.debug("_getSendText: %s",self.sendText)
        begintime = time.time()
        self.logger.debug("-------------run begintime: %s-------------",long(begintime))
        iter = list(range(self._processnum))
        result = []
        result.append(self.pool.map_async(self.suit, iter,chunksize=1))
        self.logger.debug("f pid : %s",os.getpid())
        self.pool.close()
        self.pool.join()
        
        endtime = time.time()
        self.logger.debug("endtime: %s",long(endtime))
        self._worktime = endtime - begintime
        self.logger.debug("-------------run endtime: %s-------------",long(endtime)) 
    def checkResp(self,key,resp):
        if key != "" and key != None and resp != "" and resp != None:
            if resp.find(key) > 0 : 
                return 1
        else:
            print str(resp)
            self.logger.debug( "error resp: %s",str(resp))
            return 0
    def sendAndRecv(self,ws,Success,error,min,max):
        try:
            begintime = time.time()
            ws.send(self.sendText)
            resp = ws.recv()
            endtime = time.time()
            if self.checkResp(self.checkkey,resp) == 1: 
                Success.value +=  1
            else:
                error.value += 1
        except:
            error.value += 1
            ws.connect(self._url)
        
        reqtime = endtime - begintime
        self.logger.debug( "pid: %d,reqtime: %s" , os.getpid(), str(reqtime))
        if reqtime < min.value:
            min.value = reqtime
        if reqtime > max.value:
            max.value = reqtime
            
    def calcResult(self):
        self._qps = float(self._totalnum.value / self._worktime)
        file = open(self._resultfile,"aw")
        resultseq = "excutetime: " + str(time.time()) + "\r\n"\
                     "worktime: " + str(self._worktime) + "\r\n"\
                     "totalnum: " + str(self._totalnum.value) + "\r\n"\
                     "Success: " + str(self._Success.value) + "\r\n"\
                     "error: " + str(self._error.value) + "\r\n"\
                     "qps: " + str(self._qps) + "\r\n"\
                     "min: " + str(self._min.value) + "\r\n"\
                     "max: " + str(self._max.value) + "\r\n\r\n" 
        file.writelines(resultseq)
        file.close()
        
    
    def _getSendText(self):
        try:
            file = open(self._contextfile,"r")
            self.sendText = file.read()
        except:
            self.logger.debug(format(traceback.print_exc()))
            print "getSendText exception:{}".format(traceback.print_exc())
            self.sendText = ""
        self.sendText = self.sendText.decode().encode("utf-8")
        file.close() 
    
    def _sendTextByNum(self,totalnum,Success,error,min,max):
         ws = websocket.WebSocket()
         ws.settimeout(self._timeout)
         if self.sendMode == 0: # long connection
            ws.connect(self._url)
         for i in range(0,self.sendnum):
                if self.sendMode == 1: # short connection
                    ws.connect(self._url)
                self.sendAndRecv(ws,Success,error,min,max)
                if self.sendMode == 1: # short connection
                    ws.send_close(self.STATUS_NORMAL, "close")
                totalnum.value += 1
                if totalnum.value >= self.sendnum: # the original bare `exit` was a no-op
                    break
         if self.sendMode == 0: # long connection
            ws.send_close(self.STATUS_NORMAL, "close")
    def _sendTextByTime(self,totalnum,Success,error,min,max):
        ws = websocket.WebSocket()
        ws.settimeout(self._timeout)
        begintime = time.time()
        if self.sendMode == 0: # long connection
            ws.connect(self._url)
        while(time.time() <= begintime + self.sendTime ):
            if self.sendMode == 1: # short connection
                ws.connect(self._url)
            self.sendAndRecv(ws,Success,error,min,max)
            if self.sendMode == 1: # short connection
                ws.send_close(self.STATUS_NORMAL, "close")
            totalnum.value += 1
                
        if self.sendMode == 0: # long connection
            ws.send_close(self.STATUS_NORMAL, "close")

    def __getstate__(self):
        self_dict = self.__dict__.copy()
        del self_dict['pool']
        return self_dict
    
    def __setstate__(self, state):
        self.__dict__.update(state)
        
if __name__ == "__main__":
    freeze_support()
    #cmd = CMDArgs()
    #cmd.cmd_args_parse()
    url = "ws://xxx.xxx.xxx/ws"
    pft = PerformanceTool(url,2,"testJson.txt","result.txt",5,60,10,"jfs",0,0)
    pft.run()
    pft.calcResult()

 
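No definitive fix was posted for this Python 2.6 question, but the error generally means that something reachable from the object being sent to the workers holds a thread lock (here, plausibly the Pool itself or the logger's handlers). A minimal Python 3 sketch of the workaround that PerformanceTool.__getstate__ already hints at: drop the unpicklable attribute before pickling and recreate it after unpickling. The Worker class is illustrative, not the script's code:

```python
import pickle
import threading

class Worker:
    # Minimal repro of the pattern: an instance holding an unpicklable
    # handle (a thread lock; in the script, the Pool and log handlers).
    def __init__(self, name):
        self.name = name
        self.lock = threading.Lock()  # cannot be pickled

    def __getstate__(self):
        state = self.__dict__.copy()
        del state["lock"]             # drop the unpicklable attribute
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.lock = threading.Lock()  # recreate it after unpickling

w = pickle.loads(pickle.dumps(Worker("w1")))
print(w.name)  # w1
```

The same idea would need to cover every unpicklable attribute, so if the logger is stored on the instance, it is simpler to fetch it via multiprocessing.get_logger() inside the worker instead of pickling it.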

multiprocessing.Pool - PicklingError: Can't pickle <type 'thread.lock'>: attribute lookup thread.lock failed

multiprocessing.Pool is driving me crazy...
I want to upgrade many packages, and for each one I have to check whether a newer version exists. That is done by the check_one function.
The main code lives in the Updater.update method: the Pool object is created there and its map() method is called.

Here is the code:

def check_one(args):
    res,total,package,version = args
    i = res.qsize()
    logger.info('\r[{0:.1%} - {1},{2} / {3}]',i / float(total),i,addn=False)
    try:
        json = PyPIJson(package).retrieve()
        new_version = Version(json['info']['version'])
    except Exception as e:
        logger.error('Error: Failed to fetch data for {0} ({1})',e)
        return
    if new_version > version:
        res.put_nowait((package,version,new_version,json))

class Updater(FileManager):

    # __init__ and other methods...

    def update(self):    
        logger.info('Searching for updates')
        packages = Queue.Queue()
        data = ((packages,self.set_len,dist.project_name,Version(dist.version)) \
            for dist in self.working_set)
        pool = multiprocessing.Pool()
        pool.map(check_one,data)
        pool.close()
        pool.join()
        while True:
            try:
                package,json = packages.get_nowait()
            except Queue.Empty:
                break
            txt = 'A new release is avaiable for {0}: {1!s} (old {2}),update'.format(package,version)
            u = logger.ask(txt,bool=('upgrade version','keep working version'),dont_ask=self.yes)
            if u:
                self.upgrade(package,json,new_version)
            else:
                logger.info('{0} has not been upgraded',package)
        self._clean()
        logger.success('Updating finished successfully')

When I run it, I get this strange error:

Searching for updates
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py",line 552,in __bootstrap_inner
    self.run()
  File "/usr/lib/python2.7/threading.py",line 505,in run
    self.__target(*self.__args,**self.__kwargs)
  File "/usr/local/lib/python2.7/dist-packages/multiprocessing/pool.py",line 225,in _handle_tasks
    put(task)
PicklingError: Can't pickle <type 'thread.lock'>: attribute lookup thread.lock failed
