Want to keep up with nosql-intro-original.pdf-Martin Fowler? This post provides the details and answers questions about the Chinese translation outline. It also covers A Recipe for Training Neural Networks [Chinese translation, part 1]; AI-Knowledge-based agents: propositional logic, propositional theorem proving, propositional mode...; Ajax - 'Origin localhost is not allowed by Access-Control-Allow-Origin'; and the ORA-00205 error (error in identifying control file, check alert log for more info) raised on startup after installing Oracle 11g on CentOS 7.
Contents:
- nosql-intro-original.pdf-Martin Fowler (Chinese translation outline)
- A Recipe for Training Neural Networks [Chinese translation, part 1]
- AI-Knowledge-based agents: propositional logic, propositional theorem proving, propositional mode...
- Ajax - 'Origin localhost is not allowed by Access-Control-Allow-Origin'
- ORA-00205 on startup after installing Oracle 11g on CentOS 7: error in identifying control file, check alert log for more info
nosql-intro-original.pdf-Martin Fowler (Chinese translation outline)
Page 1: The future is not just NoSQL databases, but polyglot persistence
On the future of enterprise data storage - written mainly for people involved in managing enterprise application development
Martin Fowler, Pramod Sadalage 2012.11.26
Page 2: SQL has ruled for two decades
Page 3: But SQL's dominance is cracking
Page 4: And so, NoSQL databases emerged
Page 5: Which means we can...
Page 6: But this does not mean relational databases are dead
Page 7: This leads us to polyglot persistence
Page 8: What polyglot persistence looks like
Page 9: Polyglot persistence brings enterprises more opportunities as well as challenges
Page 10: What kinds of systems are candidates for polyglot persistence
Page 11: More information...
2014.11 by ouyida3
A Recipe for Training Neural Networks [Chinese translation, part 1]
I recently read A Recipe for Training Neural Networks (https://karpathy.github.io/2019/04/25/recipe/) by Andrej Karpathy. The recipe summarizes the problems that come up while training deep learning models and offers a lot of good advice, so I have translated it here in the hope that it helps more people.

The translation follows:

A few weeks ago I posted a tweet on "the most common neural net mistakes", listing a few common gotchas related to training neural networks. The tweet got quite a bit more engagement than I anticipated (including a webinar :)). Clearly, a lot of people have personally encountered the large gap between "here is how a convolutional layer works" and "our convnet achieves state-of-the-art results".

So I thought it could be fun to brush off my dusty blog and expand the tweet to the long form this topic deserves. However, instead of enumerating common errors or analyzing them in depth, I would rather dig deeper and discuss how to avoid these errors altogether (or fix them very quickly). The trick is to follow a certain process, which, as far as I can tell, is not well documented. Let's start with the two important observations that motivate it.
1. Neural network training is a leaky abstraction
It is allegedly easy to get started with training neural networks. Numerous books and frameworks take pride in showing how 30 lines of code solve your data problem, giving everyone the very wrong impression that this stuff is plug and play. Common example code looks like:

>>> your_data = # import your dataset here
>>> model = SuperCrossValidator(SuperDuper.fit, your_data, ResNet50, SGDOptimizer)
# conquer the world from here
These libraries and examples activate the part of our brain that is familiar with standard software, where clean APIs and abstractions are often attainable. For example, the requests library demonstrates:

>>> r = requests.get('https://api.github.com/user', auth=('user', 'pass'))
>>> r.status_code
200
That's cool! A courageous developer has taken on the burden of understanding query strings, URLs, GET/POST requests, HTTP connections, and so on, and has largely hidden the complexity behind a few lines of code. This is what we are familiar with and expect. Unfortunately, neural networks are nothing like that. They are not an "off-the-shelf" technology the moment you deviate slightly from training an ImageNet classifier. I tried to illustrate this point in my post "Yes you should understand backprop" using the example of backpropagation, calling it a "leaky abstraction", but the situation is unfortunately much more dire. Backprop + SGD does not magically make your network work. Batch norm does not magically make it converge faster. RNNs don't magically let you "plug in" text. And just because you can formulate your problem as reinforcement learning doesn't mean you should. If you insist on using a technology without understanding how it works, you are likely to fail. Which brings me to...
2. Neural network training fails silently
When you write broken or misconfigured code, you will usually get some kind of exception. You plugged in an integer where something expected a string! The function only expected 3 arguments! This import failed! That key does not exist! The number of elements in the two lists isn't equal. Moreover, it is often possible to create unit tests for that kind of functionality.
Solving those problems is just the beginning of training a neural network. Everything can be syntactically correct, yet the whole thing can be arranged improperly, and it is really hard to tell. The "possible error surface" is large, logical (as opposed to syntactic), and very tricky to unit test. For example, perhaps you forgot to flip your labels when you left-right flipped the images during data augmentation. Your network can still (shockingly) work pretty well, because it can internally learn to detect flipped images and then left-right flip its predictions. Or maybe your autoregressive model accidentally takes the thing it is trying to predict as an input due to an off-by-one bug. Or you tried to clip your gradients but clipped the loss instead, causing the outlier examples to be ignored during training. Or you initialized your weights from a pretrained checkpoint but didn't use the original mean. Or you simply got the regularization strengths, learning rate, decay rate, model size, etc. wrong. Therefore, a misconfigured neural network will throw an exception only if you are lucky; most of the time it will train, but silently output results that look slightly worse.
As a result, training neural networks with a "fast and furious" approach does not work and only leads to suffering. Now, suffering is a perfectly natural part of getting a neural network to work well, but it can be mitigated by keeping every detail of the training process clearly in mind. In my experience, the qualities that correlate most strongly with success in deep learning are patience and attention to detail.
The recipe
In light of the above two facts, I have developed a specific process for myself that I follow when applying a neural network to a new problem, which I will try to describe below. You will see that it takes the two principles above very seriously. In particular, it builds from simple to complex, and at every step we make concrete hypotheses about what will happen and then either validate them with an experiment or investigate until we find the problem. What we try hard to prevent is the introduction of a lot of "unverified" complexity at once, which is bound to introduce bugs/misconfigurations that take forever to find. If writing your neural network code were like training one, you would use a very small learning rate, guess, and then evaluate the full test set after every iteration.
1. Understand your data
The first step in training a neural network is to not touch any neural network code at all and instead begin by thoroughly inspecting your data. This step is critical. I like to spend copious amounts of time (measured in hours) scanning through thousands of examples, understanding their distribution and looking for patterns. Luckily, your brain is pretty good at this. One time I discovered that the data contained duplicate examples. Another time I found corrupted image/label pairs. I typically look for data imbalances, and I also pay attention to my own process for classifying the data, which hints at the kinds of architectures we will eventually try. For example - do we need local features or global context? How much variation is there and what form does it take? What variation is spurious and could be preprocessed out? Does spatial position matter, or do we want to average it out? How much does detail matter, and how far could we afford to downsample the images? How noisy are the labels?
In addition, since the neural network is effectively a compressed/compiled version of your dataset, you will be able to look at your network's (mis)predictions and understand where they might be coming from. If your network gives you predictions that do not seem consistent with what you have seen in the data, something is off.
Once you get a qualitative sense, it is also a good idea to write some simple code to search/filter/sort by whatever you can think of (e.g. type of label, size, number of annotations, etc.) and to visualize their distributions and the outliers along any axis. The outliers in particular almost always reveal some bugs in data quality or preprocessing.
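As an illustration, here is a minimal Python sketch of this kind of exploratory check (the file name and column names are hypothetical, not from the original post):

import pandas as pd

# Hypothetical annotation file with columns: path, label, width, height.
df = pd.read_csv("labels.csv")

# Label distribution: class imbalances show up immediately.
print(df["label"].value_counts())

# Sort along an axis (here, image area) and inspect both ends;
# outliers often expose data-quality or preprocessing bugs.
df["area"] = df["width"] * df["height"]
print(df.sort_values("area").head(10))
print(df.sort_values("area").tail(10))

# Duplicate examples are another common surprise.
print("duplicate paths:", df["path"].duplicated().sum())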
2. Set up the end-to-end training/evaluation skeleton + get baselines
Now that we understand our data, can we reach for our super fancy multi-scale ASPP FPN ResNet and start training awesome models? The answer is no. That is a road full of suffering. Our next step is to set up a full training + evaluation skeleton and gain trust in its correctness through a series of experiments. At this stage it is best to pick a simple model that you couldn't possibly have screwed up - e.g. a linear classifier, or a very tiny ConvNet. We want to train it, visualize the losses, any other metrics (e.g. accuracy), and the model predictions, and perform a series of experiments with explicit hypotheses along the way.
Tips and tricks for this stage:
- Fix the random seed. Always use a fixed random seed to guarantee that when you run the code twice, you get the same result. This removes a source of variation and will help keep you sane.
- Simplify. Make sure to disable any unnecessary fanciness. For example, definitely turn off any data augmentation at this stage. Data augmentation is a regularization strategy that we may incorporate later, but for now it is just another opportunity to introduce some dumb bug.
- Be precise in your evaluation. When plotting the test loss, run the evaluation over the entire (large) test set. Do not just plot test losses over batches and then rely on smoothing them in TensorBoard. We are in pursuit of correctness and are very willing to give up time to stay sane.
- Verify the initial loss. Verify that your loss starts at the correct value. E.g. if you initialize your final layer correctly, you should see -log(1/n_classes) on a softmax at initialization. The same default values can be derived for L2 regression, Huber loss, etc.
- Initialize well. Initialize the weights of each layer correctly. E.g. if you are regressing values that have a mean of 50, initialize the final bias to 50. If you have an imbalanced dataset with a positives:negatives ratio of 1:10, set the bias on your logits so that the network predicts a probability of 0.1 at initialization. Setting these correctly will speed up convergence and eliminate "hockey stick" loss curves, where in the first few iterations your network is basically just learning the bias.
- Human baseline. Monitor metrics other than the loss that are human-interpretable and human-checkable (e.g. accuracy). Whenever possible, evaluate your own (human) accuracy on the problem and compare to it. Alternatively, annotate the test data twice and, for each example, treat one annotation as the prediction and the other as the ground truth.
- Input-independent baseline. Train a baseline that is independent of the input (e.g. the easiest way is to set all inputs to zero). This should perform worse than when you actually plug in your data without zeroing it out. Does it? I.e., does your model learn to extract any information from the input at all?
- Overfit a tiny batch. See whether the model can overfit just a few examples (e.g. as few as two). To do so, we increase the capacity of the model (e.g. add layers or filters) and verify that we can reach the lowest achievable loss (e.g. zero). I also like to visualize the label and the prediction in the same plot and make sure they align perfectly once we reach the minimum loss. If they do not, there is a bug somewhere and we cannot continue to the next stage.
- Verify that the training loss decreases. At this stage you will hopefully be underfitting your dataset, because you are working with a toy model. Try to increase its capacity just a bit. Does your training loss go down as it should?
- Look at the data again right before it enters the model. Right before your y_hat = model(x) (or sess.run in tf), check that the data is correct. That is - you want to know exactly what goes into your network, decoding the raw tensors of data and labels and visualizing them. This is the only "source of truth". I can't count how many times this has saved me and revealed problems in data preprocessing and augmentation.
- Visualize prediction dynamics. I like to visualize model predictions on a fixed test batch during the course of training. The "dynamics" of how these predictions move will give you incredibly good intuition for how training is progressing. Many times it is possible to feel the network struggling to fit your data if it wiggles too much in some way, revealing instability. Very low or very high learning rates are also easy to notice in the amount of jitter.
- Use backprop to chart dependencies. Your deep learning code often contains complicated, vectorized, and broadcasted operations. A relatively common bug I've come across is that people get this wrong (e.g. using a view instead of a transpose/permute somewhere) and inadvertently mix information across the batch dimension. Depressingly, your network will usually still train fine, because it learns to ignore data from the other examples. One way to debug this (and other related problems) is to set the loss for some example i to 1.0, run the backward pass all the way to the input, and make sure you get a non-zero gradient only on that i-th example. More generally, gradients tell you what depends on what in your network, which is useful for debugging (see the sketch after this list).
- Generalize from a special case. This is a more general coding tip, but I often see people introduce bugs when they bite off more than they can chew, writing relatively general functionality from scratch. I like to write a very specific function for what I am doing right now, get it to work, and only then refactor it into a general function, making sure I get the same result. This often applies to vectorizing code, where I almost always write out the fully loopy version first and only then transform it into vectorized code one loop at a time.
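As a sketch of the backprop dependency check mentioned above (a hypothetical PyTorch snippet; the tiny linear model just stands in for your real network):

import torch
import torch.nn as nn

model = nn.Linear(10, 5)                      # stand-in for the real network
x = torch.randn(4, 10, requires_grad=True)    # a batch of 4 examples
out = model(x)

# Make the loss depend on example i only, then backprop to the input.
i = 2
out[i].sum().backward()

# Only row i should receive gradient; a non-zero gradient on any other
# row means information is leaking across the batch dimension.
print(x.grad.abs().sum(dim=1))                # expect non-zero only at index 2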
AI-Knowledge-based agents: propositional logic, propositional theorem proving, propositional mode...
Knowledge-based agents
Intelligent agents need knowledge about the world in order to reach good decisions.
Knowledge is contained in agents in the form of sentences in a knowledge representation language that are stored in a knowledge base.
Knowledge base (KB): a set of sentences, is the central component of a knowledge-based agent. Each sentence is expressed in a language called a knowledge representation language and represents some assertion about the world.
Axiom: Sometimes we dignify a sentence with the name axiom, when the sentence is taken as given without being derived from other sentences.
TELL: The operation to add new sentences to the knowledge base.
ASK: The operation to query what is known.
Inference: Both TELL and ASK may involve inference - deriving new sentences from old ones.
The outline of a knowledge-based program:
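(The figure is not reproduced here; below is a minimal Python-style sketch of the standard KB-AGENT loop, where TELL/ASK and the sentence-construction helpers are assumed to be supplied by the knowledge base implementation.)

def kb_agent(percept, kb, t):
    """One step of a generic knowledge-based agent (a sketch)."""
    kb.tell(make_percept_sentence(percept, t))  # record what was perceived at time t
    action = kb.ask(make_action_query(t))       # query the KB for the best action
    kb.tell(make_action_sentence(action, t))    # record that the action was taken
    return action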
A knowledge-based agent is composed of a knowledge base and an inference mechanism. It operates by storing sentences about the world in its knowledge base, using the inference mechanism to infer new sentences, and using these sentences to decide what action to take.
The knowledge-based agent is not an arbitrary program for calculating actions; it is amenable to a description at the knowledge level, where we specify only what the agent knows and what its goals are in order to fix its behavior. The analysis is independent of the implementation level.
Declarative approach: A knowledge-based agent can be built simply by TELLing it what it needs to know. Starting with an empty knowledge base, the agent designer can TELL sentences one by one until the agent knows how to operate in its environment.
Procedural approach: encodes desired behaviors directly as program code.
A successful agent often combines both declarative and procedural elements in its design.
A fundamental property of logical reasoning: The conclusion is guaranteed to be correct if the available information is correct.
Logic
A representation language is defined by its syntax, which specifies the structure of sentences, and its semantics, which defines the truth of each sentence in each possible world or model.
Syntax: The sentences in KB are expressed according to the syntax of the representation language, which specifies all the sentences that are well formed.
Semantics: The semantics defines the truth of each sentence with respect to each possible world.
Models: We use the term model in place of “possible world” when we need to be precise. Possible world might be thought of as (potentially) real environments that the agent might or might not be in, models are mathematical abstractions, each of which simply fixes the truth or falsehood of every relevant sentences.
If a sentence α is true in model m, we say that m satisfies α, or m is a model of α. Notation M(α) means the set of all models of α.
The relationship of entailment between sentence is crucial to our understanding of reasoning. A sentence α entails another sentence β if β is true in all world where α is true. Equivalent definitions include the validity of the sentence α⇒β and the unsatisfiability of sentence α∧¬β.
Logical entailment: The relation between a sentence and another sentence that follows from it.
Mathematical notation: α ⊨ β: α entails the sentence β.
Formal definition of entailment:
α ⊨ β if and only if M(α) ⊆ M(β)
i.e. α ⊨ β if and only if, in every model in which α is true, β is also true.
(Notice: if α ⊨ β, then α is a stronger assertion than β: it rules out more possible worlds. )
Logical inference: The definition of entailment can be applied to derive conclusions.
E.g. Apply the analysis to the wumpus world.
The KB is false in models that contradict what the agent knows. (e.g. The KB is false in any model in which [1,2] contains a pit because there is no breeze in [1, 1]).
Consider 2 possible conclusions α1 and α2.
We see: in every model in which KB is true, α1 is also true. Hence KB ⊨ α1, so the agent can conclude that there is no pit in [1, 2].
We see: in some models in which KB is true, α2 is false. Hence KB ⊭ α2, so the agent cannot conclude that there is no pit in [2, 2].
The inference algorithm used is called model checking: Enumerate all possible models to check that α is true in all models in which KB is true, i.e. M(KB) ⊆ M(α).
If an inference algorithm i can derive α from KB, we write KB ⊢i α, pronounced "α is derived from KB by i" or "i derives α from KB."
Sound/truth preserving: An inference algorithm that derives only entailed sentences. Soundness is a highly desirable property. (e.g. model checking is a sound procedure when it is applicable.)
Completeness: An inference algorithm is complete if it can derive any sentence that is entailed. Completeness is also a desirable property.
Inference is the process of deriving new sentences from old ones. Sound inference algorithms derive only sentences that are entailed; complete algorithms derive all sentences that are entailed.
If KB is true in the real world, then any sentence α derived from KB by a sound inference procedure is also true in the real world.
Grounding: The connection between logical reasoning process and the real environment in which the agent exists.
In particular, how do we know that KB is true in the real world?
Propositional logic
Propositional logic is a simple language consisting of proposition symbols and logical connectives. It can handle propositions that are known true, known false, or completely unknown.
1. Syntax
The syntax defines the allowable sentences.
Atomic sentences: consist of a single proposition symbol; each such symbol stands for a proposition that can be true or false. (e.g. W1,3 stands for the proposition that the wumpus is in [1, 3].)
Complex sentences: constructed from simpler sentences, using parentheses and logical connectives.
2. Semantics
The semantics defines the rules for determining the truth of a sentence with respect to a particular model.
The semantics for propositional logic must specify how to compute the truth value of any sentence, given a model.
For atomic sentences: the truth value of every proposition symbol (other than True and False, which have fixed meanings) must be specified directly in the model.
For complex sentences: the truth value is computed recursively from the truth values of the parts: ¬P is true iff P is false in m; P ∧ Q is true iff both P and Q are true in m; P ∨ Q is true iff either P or Q is true in m; P ⇒ Q is true unless P is true and Q is false in m; P ⟺ Q is true iff P and Q are both true or both false in m.
A simple inference procedure
To decide whether KB ⊨ α for some sentence α:
Algorithm 1: Model-checking approach
Enumerate the models (assignments of true or false to every relevant proposition symbol), check that α is true in every model in which KB is true.
TT-ENTAILS?: A general algorithm for deciding entailment in propositional logic, performs a recursive enumeration of a finite space of assignments to symbols.
Sound and complete.
Time complexity: O(2^n); space complexity: O(n), where KB and α contain n symbols in all.
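A brute-force Python sketch of this model-checking procedure (representing sentences as functions from a model to a truth value is an assumption of this sketch, not the book's representation):

from itertools import product

def tt_entails(kb, alpha, symbols):
    """Return True iff KB entails alpha, by enumerating all 2^n models.

    kb and alpha are callables taking a model (dict: symbol -> bool);
    symbols lists every proposition symbol occurring in them.
    """
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False  # found a model of KB in which alpha is false
    return True

# Wumpus example: KB = (B11 <=> (P12 or P21)) and not B11; alpha = not P12.
kb = lambda m: (m["B11"] == (m["P12"] or m["P21"])) and not m["B11"]
alpha = lambda m: not m["P12"]
print(tt_entails(kb, alpha, ["B11", "P12", "P21"]))  # True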
Propositional theorem proving
We can determine entailment by model checking (enumerating models, introduced above) or theorem proving.
Theorem proving: Applying rules of inference directly to the sentences in our knowledge base to construct a proof of the desired sentence without consulting models.
Inference rules are patterns of sound inference that can be used to find proofs. The resolution rule yields a complete inference algorithm for knowledge bases that are expressed in conjunctive normal form. Forward chaining and backward chaining are very natural reasoning algorithms for knowledge bases in Horn form.
Logical equivalence:
Two sentences α and β are logically equivalent if they are true in the same set of models (written α ≡ β).
Also: α ≡ β if and only if α ⊨ β and β ⊨ α.
Validity: A sentence is valid if it is true in all models.
Valid sentences are also known as tautologies—they are necessarily true. Every valid sentence is logically equivalent to True.
The deduction theorem: For any sentences α and β, α ⊨ β if and only if the sentence (α ⇒ β) is valid.
Satisfiability: A sentence is satisfiable if it is true in, or satisfied by, some model. Satisfiability can be checked by enumerating the possible models until one is found that satisfies the sentence.
The SAT problem: The problem of determining the satisfiability of sentences in propositional logic.
Validity and satisfiability are connected:
α is valid iff ¬α is unsatisfiable;
α is satisfiable iff ¬α is not valid;
α ⊨ β if and only if the sentence (α∧¬β) is unsatisfiable.
Proving β from α by checking the unsatisfiability of (α∧¬β) corresponds to proof by refutation / proof by contradiction.
Inference and proofs
Inference rules (such as Modus Ponens and And-Elimination) can be applied to derive a proof.
·Modus Ponens:
Whenever any sentences of the form α⇒β and α are given, the sentence β can be inferred.
·And-Elimination:
From a conjunction, any of the conjuncts can be inferred.
·All of logical equivalence (in Figure 7.11) can be used as inference rules.
e.g. The equivalence for biconditional elimination yields 2 inference rules: from α⟺β, infer α⇒β; and from α⟺β, infer β⇒α.
·De Morgan's rule: ¬(α∧β) ≡ (¬α∨¬β) and ¬(α∨β) ≡ (¬α∧¬β).
We can apply any of the search algorithms in Chapter 3 to find a sequence of steps that constitutes a proof. We just need to define a proof problem as follows:
·INITIAL STATE: the initial knowledge base;
·ACTION: the set of actions consists of all the inference rules applied to all the sentences that match the top half of the inference rule.
·RESULT: the result of an action is to add the sentence in the bottom half of the inference rule.
·GOAL: the goal is a state that contains the sentence we are trying to prove.
In many practical cases, finding a proof can be more efficient than enumerating models, because the proof can ignore irrelevant propositions, no matter how many of them there are.
Monotonicity: A property of logical systems which says that the set of entailed sentences can only increase as information is added to the knowledge base.
For any sentences α and β, if KB ⊨ α then KB ∧ β ⊨ α.
Monotonicity means that inference rules can be applied whenever suitable premises are found in the knowledge base - whatever else is in the knowledge base cannot invalidate any conclusion already inferred.
Proof by resolution
Resolution: An inference rule that yields a complete inference algorithm when coupled with any complete search algorithm.
Clause: A disjunction of literals. (e.g. A∨B). A single literal can be viewed as a unit clause (a disjunction of one literal ).
Unit resolution inference rule: takes a clause and a literal and produces a new clause:
(l1 ∨ … ∨ lk), m ⊢ (l1 ∨ … ∨ li−1 ∨ li+1 ∨ … ∨ lk)
where each l is a literal, and li and m are complementary literals (one is the negation of the other).
Full resolution rule: takes 2 clauses and produces a new clause:
(l1 ∨ … ∨ lk), (m1 ∨ … ∨ mn) ⊢ (l1 ∨ … ∨ li−1 ∨ li+1 ∨ … ∨ lk ∨ m1 ∨ … ∨ mj−1 ∨ mj+1 ∨ … ∨ mn)
where li and mj are complementary literals.
Notice: The resulting clause should contain only one copy of each literal. The removal of multiple copies of a literal is called factoring.
e.g. resolving (A∨B) with (A∨¬B) yields (A∨A), which reduces to just A.
The resolution rule is sound and complete.
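A small Python sketch of a single resolution step (representing a clause as a frozenset of string literals, with '-A' for ¬A, is an assumption of this sketch; factoring falls out of the set representation automatically):

def resolve(ci, cj):
    """Return all resolvents of two clauses (frozensets of literals)."""
    resolvents = []
    for lit in ci:
        neg = lit[1:] if lit.startswith("-") else "-" + lit
        if neg in cj:
            # drop the complementary pair and union the remaining literals
            resolvents.append((ci - {lit}) | (cj - {neg}))
    return resolvents

# e.g. resolving (A or B) with (A or not B) yields just {A}:
print(resolve(frozenset({"A", "B"}), frozenset({"A", "-B"})))  # [frozenset({'A'})]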
Conjunctive normal form
Conjunctive normal form (CNF): A sentence expressed as a conjunction of clauses is said to be in CNF.
Every sentence of propositional logic is logically equivalent to a conjunction of clauses; after converting a sentence into CNF, it can be used as input to a resolution procedure.
A resolution algorithm
e.g.
KB = (B1,1⟺(P1,2∨P2,1))∧¬B1,1
α = ¬P1,2
Notice: Any clause in which two complementary literals appear can be discarded, because it is always equivalent to True.
e.g. B1,1∨¬B1,1∨P1,2 = True∨P1,2 = True.
PL-RESOLUTION is complete.
Horn clauses and definite clauses
Definite clause: A disjunction of literals of which exactly one is positive. (e.g. ¬L1,1 ∨ ¬Breeze ∨ B1,1)
Every definite clause can be written as an implication whose premise is a conjunction of positive literals and whose conclusion is a single positive literal. (e.g. the clause above becomes (L1,1 ∧ Breeze) ⇒ B1,1.)
Horn clause: A disjunction of literals of which at most one is positive. (All definite clauses are Horn clauses.)
In Horn form, the premise is called the body and the conclusion is called the head.
A sentence consisting of a single positive literal is called a fact, it too can be written in implication form.
Horn clauses are closed under resolution: if you resolve 2 Horn clauses, you get back a Horn clause.
Inference with horn clauses can be done through the forward-chaining and backward-chaining algorithms.
Deciding entailment with Horn clauses can be done in time that is linear in the size of the knowledge base.
Goal clause: A clause with no positive literals.
Forward and backward chaining
forward-chaining algorithm: PL-FC-ENTAILS?(KB, q) (runs in linear time)
Forward chaining is sound and complete.
e.g. A knowledge base of Horn clauses with A and B as known facts.
fixed point: The algorithm reaches a fixed point where no new inferences are possible.
Data-driven reasoning: Reasoning in which the focus of attention starts with the known data. It can be used within an agent to derive conclusions from incoming percept, often without a specific query in mind. (forward chaining is an example)
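A Python sketch of forward chaining over definite clauses (the (premises, conclusion) pair representation is an assumption of this sketch):

from collections import deque

def pl_fc_entails(clauses, facts, q):
    """Forward chaining for definite clauses, linear in the size of the KB.

    clauses: list of (frozenset of premise symbols, conclusion symbol);
    facts: the symbols known to be true; q: the query symbol.
    """
    count = {i: len(premises) for i, (premises, _) in enumerate(clauses)}
    inferred = set()
    agenda = deque(facts)
    while agenda:
        p = agenda.popleft()
        if p == q:
            return True
        if p in inferred:
            continue
        inferred.add(p)
        for i, (premises, conclusion) in enumerate(clauses):
            if p in premises:
                count[i] -= 1
                if count[i] == 0:          # all premises proved: fire the rule
                    agenda.append(conclusion)
    return False

# e.g. with A and B as known facts: (A and B) => C and C => D entail D.
clauses = [(frozenset({"A", "B"}), "C"), (frozenset({"C"}), "D")]
print(pl_fc_entails(clauses, ["A", "B"], "D"))  # True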
Backward-chaining algorithm: works backward from the query.
If the query q is known to be true, no work is needed;
Otherwise the algorithm finds those implications in the KB whose conclusion is q. If all the premises of one of those implications can be proved true (by backward chaining), then q is true. (runs in linear time)
In the corresponding AND-OR graph: it works back down the graph until it reaches a set of known facts.
(Backward-chaining algorithm is essentially identical to the AND-OR-GRAPH-SEARCH algorithm.)
Backward-chaining is a form of goal-directed reasoning.
Effective propositional model checking
The set of possible models, given a fixed propositional vocabulary, is finite, so entailment can be checked by enumerating models. Efficient model-checking inference algorithms for propositional logic include backtracking and local search methods and can often solve large problems quickly.
2 families of algorithms for the SAT problem based on model checking:
a. based on backtracking
b. based on local hill-climbing search
1. A complete backtracking algorithm
Davis-Putnam algorithm (DPLL):
DPLL embodies 3 improvements over the scheme of TT-ENTAILS?: Early termination, pure symbol heuristic, unit clause heuristic.
Tricks that enable SAT solvers to scale up to large problems: Component analysis, variable and value ordering, intelligent backtracking, random restarts, clever indexing.
2. Local search algorithms
Local search algorithms can be applied directly to the SAT problem, provided that we choose the right evaluation function. (We can choose an evaluation function that counts the number of unsatisfied clauses.)
These algorithms take steps in the space of complete assignments, flipping the truth value of one symbol at a time.
The space usually contains many local minima, to escape from which various forms of randomness are required.
Local search methods such as WALKSAT can be used to find solutions. Such algorithms are sound but not complete.
WALKSAT: one of the simplest and most effective algorithms.
On every iteration, the algorithm picks an unsatisfied clause, and chooses randomly between 2 ways to pick a symbol to flip:
Either a. a “min-conflicts” step that minimizes the number of unsatisfied clauses in the new state;
Or b. a “random walk” step that picks the symbol randomly.
When the algorithm returns a model, the input sentence is indeed satisfiable;
When the algorithm returns failure, there are 2 possible causes:
Either a. The sentence is unsatisfiable;
Or b. We need to give the algorithm more time.
If we set max_flips = ∞ and p > 0, the algorithm will:
Either a. eventually return a model if one exists,
Or b. never terminate if the sentence is unsatisfiable.
Thus WALKSAT is useful when we expect a solution to exist, but cannot always detect unsatisfiability.
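A minimal Python sketch of WALKSAT as described above (the clause representation, sets of string literals with '-A' for ¬A, is an assumption of this sketch):

import random

def lit_true(lit, model):
    return not model[lit[1:]] if lit.startswith("-") else model[lit]

def walksat(clauses, p=0.5, max_flips=100000):
    """Return a satisfying model, or None (unsat OR just not enough time)."""
    symbols = list({lit.lstrip("-") for clause in clauses for lit in clause})
    model = {s: random.choice([True, False]) for s in symbols}
    for _ in range(max_flips):
        unsatisfied = [c for c in clauses
                       if not any(lit_true(l, model) for l in c)]
        if not unsatisfied:
            return model
        clause = random.choice(unsatisfied)
        if random.random() < p:
            # "random walk" step: flip a random symbol from the clause
            sym = random.choice(sorted(clause)).lstrip("-")
        else:
            # "min-conflicts" step: flip the symbol that minimizes the
            # number of unsatisfied clauses in the resulting state
            def cost(s):
                model[s] = not model[s]
                bad = sum(not any(lit_true(l, model) for l in c) for c in clauses)
                model[s] = not model[s]
                return bad
            sym = min((l.lstrip("-") for l in clause), key=cost)
        model[sym] = not model[sym]
    return None

# e.g. (A or B) and (not A or B) is satisfied by B = True:
print(walksat([{"A", "B"}, {"-A", "B"}]))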
The landscape of random SAT problems
Underconstrained problem: When we look at satisfiability problems in CNF, an underconstrained problem is one with relatively few clauses constraining the variables.
An overconstrained problem has many clauses relative to the number of variables and is likely to have no solutions.
The notation CNFk(m, n) denotes a k-CNF sentence with m clauses and n symbols (k literals per clause).
We assume a source of random sentences, where the clauses are chosen uniformly, independently, and without replacement from among all clauses with k different literals, which are positive or negative at random.
Hardness: problems right at the threshold > overconstrained problems > underconstrained problems
Satisfiability threshold conjecture: the conjecture that for every k ≥ 3, there is a threshold ratio rk such that, as n goes to infinity, the probability that CNFk(n, rn) is satisfiable becomes 1 for all values of r below the threshold, and 0 for all values above. (It remains unproven.)
Agents based on propositional logic
1. The current state of the world
We can associate each proposition with a time step to avoid contradictions (e.g. ¬Stench3, Stench4).
Fluent: refers to an aspect of the world that changes. (e.g. L^t_{x,y})
atemporal variables: Symbols associated with permanent aspects of the world do not need a time superscript.
Effect axioms: specify the outcome of an action at the next time step.
Frame problem: information is lost because the effect axioms fail to state what remains unchanged as the result of an action.
Solution: add frame axioms explicitly asserting all the propositions that remain the same.
Representational frame problem: the proliferation of frame axioms is inefficient; the set of frame axioms will be O(mn) in a world with m different actions and n fluents.
Solution: because the world exhibits locality (for humans, each action typically changes no more than some number k of the fluents), define the transition model with a set of axioms of size O(mk) rather than size O(mn).
Inferential frame problem: the problem of projecting forward the results of a t-step plan of actions in time O(kt) rather than O(nt).
Solution: change one’s focus from writing axioms about actions to writing axioms about fluents.
For each fluent F, we will have an axiom that defines the truth value of F^{t+1} in terms of fluents at time t and the actions that may have occurred at time t.
The truth value of F^{t+1} can be set in one of 2 ways:
Either a. The action at time t causes F to be true at t+1,
Or b. F was already true at time t and the action at time t does not cause it to be false.
An axiom of this form is called a successor-state axiom and has this schema: F^{t+1} ⟺ ActionCausesF^t ∨ (F^t ∧ ¬ActionCausesNotF^t).
Qualification problem: specifying all unusual exceptions that could cause the action to fail.
2. A hybrid agent
Hybrid agent: combines the ability to deduce various aspects of the state of the world with condition-action rules and with problem-solving algorithms.
The agent maintains and updates a knowledge base as well as a current plan.
The initial KB contains the atemporal axioms. (don’t depend on t)
At each time step, the new percept sentence is added along with all the axioms that depend on t (such as the successor-state axioms).
Then the agent uses logical inference, ASKing questions of the KB (to work out which squares are safe and which have yet to be visited).
The main body of the agent program constructs a plan based on a decreasing priority of goals:
1. If there is a glitter, construct a plan to grab the gold, follow a route back to the initial location, and climb out of the cave;
2. Otherwise, if there is no current plan, plan a route (with A* search) to the closest safe square not yet visited, making sure the route goes through only safe squares;
3. If there are no safe squares to explore and the agent still has an arrow, try to make a safe square by shooting at one of the possible wumpus locations;
4. If this fails, look for a square to explore that is not provably unsafe;
5. If there is no such square, the mission is impossible; retreat to the initial location and climb out of the cave.
Weakness: The computational expense goes up as time goes by.
3. Logical state estimation
To get a constant update time, we need to cache the result of inference.
Belief state: Some representation of the set of all possible current state of the world. (used to replace the past history of percepts and all their ramifications)
e.g. We use a logical sentence involving the proposition symbols associated with the current time step, plus the atemporal symbols.
Logical state estimation involves maintaining a logical sentence that describes the set of possible states consistent with the observation history. Each update step requires inference using the transition model of the environment, which is built from successor-state axioms that specify how each fluent changes.
State estimation: The process of updating the belief state as new percepts arrive.
Exact state estimation may require logical formulas whose size is exponential in the number of symbols.
One common scheme for approximate state estimation: to represent belief state as conjunctions of literals (1-CNF formulas).
The agent simply tries to prove X^t and ¬X^t for each symbol X^t, given the belief state at t−1.
The conjunction of provable literals becomes the new belief state, and the previous belief state is discarded.
(This scheme may lose some information as time goes along.)
The set of possible states represented by the 1-CNF belief state includes all states that are in fact possible given the full percept history. The 1-CNF belief state acts as a simple outer envelope, or conservative approximation.
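A sketch of this approximate update in Python; the prove() entailment checker (e.g. resolution- or SAT-based) and the axiom/symbol bookkeeping are assumptions of the sketch, not part of the text:

def update_1cnf_belief_state(prev_literals, step_axioms, symbols_t, prove):
    """One step of approximate (1-CNF) logical state estimation.

    prev_literals: set of literals believed at time t-1;
    step_axioms: percept sentence + successor-state axioms for this step;
    symbols_t: the proposition symbols for time t;
    prove(premises, query) -> bool: any sound entailment checker.
    """
    premises = prev_literals | step_axioms
    new_belief = set()
    for x in symbols_t:
        if prove(premises, x):
            new_belief.add(x)            # X^t is provable
        elif prove(premises, "-" + x):
            new_belief.add("-" + x)      # (not X^t) is provable
        # otherwise X^t is unknown and simply dropped: the 1-CNF belief
        # state is a conservative outer envelope, so information may be lost
    return new_belief  # the previous belief state is discarded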
4. Making plans by propositional inference
We can make plans by logical inference instead of A* search in Figure 7.20.
Basic idea:
1. Construct a sentence that includes:
a) Init0: a collection of assertions about the initial state;
b) Transition1, …, Transitiont: The successor-state axioms for all possible actions at each time up to some maximum time t;
c) HaveGold^t ∧ ClimbedOut^t: The assertion that the goal is achieved at time t.
2. Present the whole sentence to a SAT solver. If the solver finds a satisfying model, the goal is achievable; else the planning is impossible.
3. Assuming a model is found, extract from the model those variables that represent actions and are assigned true.
Together they represent a plan to achieve the goals.
Decisions within a logical agent can be made by SAT solving: finding possible models specifying future action sequences that reach the goal. This approach works only for fully observable or sensorless environments.
SATPLAN: A propositional planning algorithm. (It cannot be used in a partially observable environment.)
SATPLAN finds models for a sentence containing the initial state, the goal, the successor-state axioms, and the action exclusion axioms.
(Because the agent does not know how many steps it will take to reach the goal, the algorithm tries each possible number of steps t up to some maximum conceivable plan length Tmax.)
Precondition axioms: stating that an action occurrence requires the preconditions to be satisfied, added to avoid generating plans with illegal actions.
Action exclusion axioms: added to avoid the creation of plans with multiple simultaneous actions that interfere with each other.
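A sketch of the SATPLAN outer loop described above (the CNF encoder, the SAT solver interface, and the action-symbol test are assumed helpers, not from the text):

def satplan(init, axioms, goal, t_max, encode, sat_solve, is_action_symbol):
    """Try plan lengths t = 0 .. t_max; return the first plan found, else None.

    encode(init, axioms, goal, t): CNF for the initial state, the
    successor-state / precondition / action exclusion axioms up to step t,
    and the goal at time t; sat_solve(cnf): model dict (symbol -> bool) or None.
    """
    for t in range(t_max + 1):
        model = sat_solve(encode(init, axioms, goal, t))
        if model is not None:
            # the action symbols assigned true together represent a plan
            return sorted(s for s, v in model.items() if v and is_action_symbol(s))
    return None  # no plan of length <= t_max exists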
Propositional logic does not scale to environments of unbounded size because it lacks the expressive power to deal concisely with time, space, and universal patterns of relationships among objects.
Ajax - 'Origin localhost is not allowed by Access-Control-Allow-Origin'
How do I resolve the Ajax error 'Origin localhost is not allowed by Access-Control-Allow-Origin'?
This error is caused by restrictions imposed by cross-origin resource sharing (CORS). CORS is a security feature that restricts which clients (domains) may consume a resource from another origin. When you send a request to a web service or API, the browser adds an Origin header to the request so that the server or target (here, the API) can verify that the request comes from an authorized source. Ideally, the API/server should look for the Origin in the received request headers and validate it against the set of origins (domains) it is allowed to serve. If the request comes from an allowed domain, the server adds a response header "Access-Control-Allow-Origin" with that value. A wildcard can also be used, but then anyone can make the request and be served (with some restrictions: for example, authenticating to the API via Windows auth or cookies is not allowed, since you cannot send withCredentials when the value is *). Using a wildcard origin in the response header is not good practice, because it is open to everyone.
These are the ways to set the response header:
Access-Control-Allow-Origin: *
Access-Control-Allow-Origin: http://yourdomain.com
You can even add multiple Access-Control-Allow-Origin headers in the same response (I believe this works in most browsers):
Access-Control-Allow-Origin: http://yourdomain1.com
Access-Control-Allow-Origin: http://yourdomain2.com
Access-Control-Allow-Origin: http://yourdomain3.com
On the server side (C# syntax), you can do this:
var sourceDomain = Request.Headers["Origin"]; // The origin domain of the request
Response.AppendHeader("Access-Control-Allow-Origin", sourceDomain); // Set the response header with the origin value after validation (if any). Depending on the type of application, the syntax may vary.
Hope this helps!
The original question
I am new to Ajax and have just been given the task of making this cross-domain call. Our web page has a text box that users can use to perform a company name search. Clicking the button next to the text box issues the Ajax request. Unfortunately, the web service lives on a separate domain, so naturally this causes problems.

Below is my best attempt at making this work. Note also that the purpose of this call is to return results in XML format, which will be parsed in the success part of the request.

Here is the error message again:

Origin http://localhost:55152 is not allowed by Access-Control-Allow-Origin.

I am at a loss for a workaround and would appreciate any ideas.
function GetProgramDetails() {
    var URL = "http://quahildy01/xRMDRMA02/xrmservices/2011/OrganizationData.svc/AccountSet?$select=AccountId,Name,neu_UniqueId&$filter=startswith(Name,'" + $('.searchbox').val() + "')";
    var request = $.ajax({
        type: 'POST',
        url: URL,
        contentType: "application/x-www-form-urlencoded",
        crossDomain: true,
        dataType: 'xml', // the response is XML; 'XMLHttpRequest' is not a valid dataType
        success: function (data) {
            console.log(data);
            alert(data);
        },
        error: function (data) {
            console.log(data);
            alert("Unable to process your request at this time.");
        }
    });
}
Oracle 11g on CentOS 7: startup reports ORA-00205: error in identifying control file, check alert log for more info
Tue Mar 05 09:43:52 2019
ALTER DATABASE MOUNT
ORA-00210: cannot open the specified control file
ORA-00202: control file: '/u01/oracle/flash_recovery_area/orcl/control02.ctl'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-00210: cannot open the specified control file
ORA-00202: control file: '/u01/oracle/oradata/orcl/control01.ctl'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 3
ORA-205 signalled during: ALTER DATABASE MOUNT...
All of the control files are missing. How should I go about regenerating the control files?
That concludes today's introduction to nosql-intro-original.pdf-Martin Fowler and the Chinese translation outline. Thank you for reading. More information about A Recipe for Training Neural Networks [Chinese translation, part 1], AI-Knowledge-based agents, the Ajax 'Origin localhost is not allowed by Access-Control-Allow-Origin' error, and the ORA-00205 error when starting Oracle 11g on CentOS 7 can be found by searching this site.