In this article we walk through php – converting a Google search query to a PostgreSQL "tsquery", answer common questions about the Google search URL format, and also cover c# – converting a user-entered search query into a WHERE clause for SQL Server full-text search, compiling PHP from source with PostgreSQL support on CentOS 7, database – converting a SQLite SQL dump file to PostgreSQL, and Distributed PostgreSQL on a Google Spanner Architecture – Query Layer.
Contents of this article:
- php – Converting a Google search query to a PostgreSQL "tsquery" (Google search URL format)
- c# – Converting a user-entered search query into a WHERE clause for SQL Server full-text search
- Compiling PHP from source with PostgreSQL support on CentOS 7
- database – Converting a SQLite SQL dump file to PostgreSQL
- Distributed PostgreSQL on a Google Spanner Architecture – Query Layer
php – Converting a Google search query to a PostgreSQL "tsquery" (Google search URL format)
How can I convert a Google search query into something I can feed to PostgreSQL's to_tsquery()?
If there is no existing library for this, how should I go about parsing a Google search query in a language like PHP?
For example, I'd like to take the following Google-ish search query:
("used cars" OR "new cars") -ford -mistubishi
and convert it into a to_tsquery()-friendly string:
('used cars' | 'new cars') & !ford & !mistubishi
I could fudge this with regular expressions, but that's the best I can do. Is there a solid lexical-analysis approach to tackling this? I'd like to be able to support extended search operators (such as Google's site: and intitle:), which would apply to different database fields and therefore need to be kept separate from the tsquery string.
Update: I realize that with the special operators this becomes a Google-to-SQL-WHERE-clause conversion rather than a Google-to-tsquery conversion, but the WHERE clause may contain one or more tsqueries.
For example, the Google-style query:
((color:blue OR "4x4") OR style:coupe) -color:red used
should produce a SQL WHERE clause like this:
WHERE to_tsvector(description) MATCH to_tsquery('used')
AND color <> 'red'
AND ( (color = 'blue' OR to_tsvector(description) MATCH to_tsquery('4x4') )
      OR style = 'coupe' );
I'm not sure whether regular expressions could achieve the above?
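For reference, once a tsquery string like the one above has been produced, the PostgreSQL side looks roughly like this minimal sketch (the connection parameters, table and column names are made up for illustration):
<?php
// Illustrative only - the connection string, table and column names are invented.
$conn = pg_connect('dbname=cars');
$tsquery = "('used cars' | 'new cars') & !ford & !mistubishi";
$result = pg_query_params(
    $conn,
    "SELECT * FROM listings
     WHERE to_tsvector('english', description) @@ to_tsquery('english', $1)",
    array( $tsquery )
);
while (($row = pg_fetch_assoc($result)) !== false) {
    // process each matching row
}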
Solution:
Honestly, I don't think a regular expression is the right tool for this. It is an interesting exercise, though. The code below is very much a prototype – in fact, you'll notice I didn't even implement the lexer itself; I just faked its output. I'd like to keep going, but I don't have any more free time today.
Also, there is clearly a lot more work to do in terms of supporting other kinds of search operators and so on.
Basically, the idea is to lex and then parse a query of one type into a common intermediate format (in this case, QueryExpression instances), and then render it back out as a query of another type.
<?php
ini_set( "display_errors", "on" );
error_reporting( E_ALL );
interface ILexer
{
public function execute( $str );
public function getTokens();
}
interface IParser
{
public function __construct( ILexer $lexer );
public function parse( $input );
public function addToken( $token );
}
class GoogleQueryLexer implements ILexer
{
private $tokenStack = array();
public function execute( $str )
{
$chars = str_split( $str );
foreach ( $chars as $char )
{
// add to $this->tokenStack per your lexing rules
}
//'("used cars" OR "new cars") -ford -mistubishi'
$this->tokenStack = array(
'('
, 'used cars'
, 'or new cars'
, ')'
, '-ford'
, '-mitsubishi'
);
}
public function getTokens()
{
return $this->tokenStack;
}
}
class GoogleQueryParser implements IParser
{
protected $lexer;
protected $tokenStack = array();
public function __construct( ILexer $lexer )
{
$this->lexer = $lexer;
}
public function addToken( $token )
{
$this->tokenStack[] = $token;
}
public function parse( $input )
{
$this->lexer->execute( $input );
$tokens = $this->lexer->getTokens();
$expression = new QueryExpression();
foreach ( $tokens as $token )
{
$expression = $this->processToken( $token, $expression );
}
return $expression;
}
protected function processToken( $token, QueryExpression $expression )
{
switch ( $token )
{
case '(':
return $expression->initiateSubExpression();
break;
case ')':
return $expression->getParentExpression();
break;
default:
$modifier = $token[0];
$phrase = substr( $token, 1 );
switch ( $modifier )
{
case '-':
$expression->addExclusionPhrase( $phrase );
break;
case '+':
$expression->addPhrase( $phrase );
break;
default:
$operator = trim( substr( $token, 0, strpos( $token, ' ' ) ) );
$phrase = trim( substr( $token, strpos( $token, ' ' ) ) );
switch ( strtolower( $operator ) )
{
case 'and':
$expression->addAndPhrase( $phrase );
break;
case 'or':
$expression->addOrPhrase( $phrase );
break;
default:
$expression->addPhrase( $token );
}
}
}
return $expression;
}
}
class QueryExpression
{
protected $phrases = array();
protected $subExpressions = array();
protected $parent;
public function __construct( $parent=null )
{
$this->parent = $parent;
}
public function initiateSubExpression()
{
$expression = new self( $this );
$this->subExpressions[] = $expression;
return $expression;
}
public function getPhrases()
{
return $this->phrases;
}
public function getSubExpressions()
{
return $this->subExpressions;
}
public function getParentExpression()
{
return $this->parent;
}
protected function addQueryPhrase( QueryPhrase $phrase )
{
$this->phrases[] = $phrase;
}
public function addPhrase( $input )
{
$this->addQueryPhrase( new QueryPhrase( $input ) );
}
public function addOrPhrase( $input )
{
$this->addQueryPhrase( new QueryPhrase( $input, QueryPhrase::MODE_OR ) );
}
public function addAndPhrase( $input )
{
$this->addQueryPhrase( new QueryPhrase( $input, QueryPhrase::MODE_AND ) );
}
public function addExclusionPhrase( $input )
{
$this->addQueryPhrase( new QueryPhrase( $input, QueryPhrase::MODE_EXCLUDE ) );
}
}
class QueryPhrase
{
const MODE_DEFAULT = 1;
const MODE_OR = 2;
const MODE_AND = 3;
const MODE_EXCLUDE = 4;
protected $phrase;
protected $mode;
public function __construct( $input, $mode=self::MODE_DEFAULT )
{
$this->phrase = $input;
$this->mode = $mode;
}
public function getMode()
{
return $this->mode;
}
public function __toString()
{
return $this->phrase;
}
}
class TsqueryBuilder
{
protected $expression;
protected $query;
public function __construct( QueryExpression $expression )
{
$this->query = trim( $this->processExpression( $expression ), ' &|' );
}
public function getResult()
{
return $this->query;
}
protected function processExpression( QueryExpression $expression )
{
$query = '';
$phrases = $expression->getPhrases();
$subExpressions = $expression->getSubExpressions();
foreach ( $phrases as $phrase )
{
$format = "'%s' ";
switch ( $phrase->getMode() )
{
case QueryPhrase::MODE_AND :
$format = "& '%s' ";
break;
case QueryPhrase::MODE_OR :
$format = "| '%s' ";
break;
case QueryPhrase::MODE_EXCLUDE :
$format = "& !'%s' ";
break;
}
$query .= sprintf( $format, str_replace( "'", "\\'", $phrase ) );
}
foreach ( $subExpressions as $subExpression )
{
$query .= "& (" . $this->processExpression( $subExpression ) . ")";
}
return $query;
}
}
$parser = new GoogleQueryParser( new GoogleQueryLexer() );
$queryBuilder = new TsqueryBuilder( $parser->parse( '("used cars" OR "new cars") -ford -mistubishi' ) );
echo $queryBuilder->getResult();
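With the stubbed token list in GoogleQueryLexer::execute(), this sketch prints something along the lines of !'ford' & !'mitsubishi' & ('used cars' | 'new cars'): the term order differs from the hand-written target string above, but the operators are equivalent.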
c# – Converting a user-entered search query into a WHERE clause for SQL Server full-text search
A user-entered search query such as:
+"e-mail" +attachment -"word document" -"e-learning"
should translate into something like the following:
SELECT * FROM MyTable WHERE (CONTAINS(*,'"e-mail"')) AND (CONTAINS(*,'"attachment"')) AND (NOT CONTAINS(*,'"word document"')) AND (NOT CONTAINS(*,'"e-learning"'))
I'm currently using a query parser class that uses regular expressions to split the user-entered query into tokens and then builds the WHERE clause from those tokens.
However, given that this is probably a common requirement for systems using full-text search, I'm curious how other developers handle it and whether there is a better way.
Solution
http://www.sqlservercentral.com/articles/Full-Text+Search+(2008)/64248/
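The linked article covers the SQL Server side in depth. As a rough illustration of the tokenize-then-build approach described in the question (a minimal PHP sketch, not the QueryParser class the asker links to), the +/- prefixed tokens can be turned into a CONTAINS-based WHERE fragment like this:
<?php
// Hypothetical sketch: turn a +/- prefixed, optionally quoted query into a
// SQL Server full-text WHERE fragment. Escaping here is deliberately simplistic.
function buildFullTextWhere($query)
{
    // An optional +/- prefix followed by a "quoted phrase" or a bare word.
    preg_match_all('/([+-]?)(?:"([^"]+)"|(\S+))/', $query, $matches, PREG_SET_ORDER);
    $clauses = array();
    foreach ($matches as $m) {
        $phrase = $m[2] !== '' ? $m[2] : $m[3];
        $phrase = str_replace("'", "''", $phrase);   // escape for the T-SQL literal
        $contains = sprintf("CONTAINS(*,'\"%s\"')", $phrase);
        $clauses[] = ($m[1] === '-') ? "(NOT $contains)" : "($contains)";
    }
    return implode(' AND ', $clauses);
}
echo buildFullTextWhere('+"e-mail" +attachment -"word document" -"e-learning"');
// (CONTAINS(*,'"e-mail"')) AND (CONTAINS(*,'"attachment"')) AND
// (NOT CONTAINS(*,'"word document"')) AND (NOT CONTAINS(*,'"e-learning"'))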
Compiling PHP from source with PostgreSQL support on CentOS 7
1. Download the source
$ mkdir /usr/downloads
$ wget -c http://cn2.php.net/distributions/php-5.6.20.tar.gz
$ tar -xvf php-5.6.20.tar.gz
$ mv php-5.6.20 /usr/local/src
$ cd !$ && cd php-5.6.20
2. Read the installation notes
$ ls -also
$ less README
$ less INSTALL
3. Install the dependencies
$ yum install apr apr-util apr-devel apr-util-devel pcre lynx
4. Install httpd
$ wget -c http://apache.fayea.com//httpd/httpd-2.4.20.tar.gz
$ tar -xvf httpd-2.4.20.tar.gz
$ cd httpd-2.4.20
$ ./configure \
    --prefix=/usr/local/programs/apache2 \
    --enable-rewrite \
    --enable-so \
    --enable-headers \
    --enable-expires \
    --with-mpm=worker \
    --enable-modules=most \
    --enable-deflate \
    --enable-module=shared
$ make
$ make install
$ cd /usr/local/programs/apache2
$ cp bin/apachectl /etc/init.d/httpd    ## copy the startup script
$ /etc/init.d/httpd start               ## start Apache, then visit http://localhost/
$ egrep -v '^[ ]*#|^$' /usr/local/programs/apache2/conf/httpd.conf | nl    ## review the Apache configuration
## Add Apache to the system startup:
$ vi /etc/rc.d/rc.local
```
/usr/local/programs/apache2/bin/apachectl start
```
$ cat /etc/rc.local
5. Install PostgreSQL
$ yum install readline-devel    ## install the readline dependency
$ cd /usr/downloads
$ wget -c https://ftp.postgresql.org/pub/source/v9.5.0/postgresql-9.5.0.tar.bz2
$ tar -xvf postgresql-9.5.0.tar.bz2
$ cd postgresql-9.5.0
$ ./configure --prefix=/usr/local/programs/postgresql
$ make
$ su
$ make install
$ /sbin/ldconfig /usr/local/programs/postgresql/lib    ## refresh the shared library cache
$ cd /usr/local/programs/postgresql
$ bin/psql --version    ## check that the build works
## Configure PostgreSQL
$ vi /etc/profile.d/postgresql.sh    ## add the environment variable; editing /etc/profile directly is not recommended, since system upgrades may require a merge
```
PATH=/usr/local/programs/postgresql/bin:$PATH
export PATH
```
$ source /etc/profile    ## reload the environment variables
## Create the user and the extra directories
$ adduser postgres
$ passwd postgres
$ mkdir /usr/local/programs/postgresql/logs
$ mkdir /usr/local/programs/postgresql/data
$ chown postgres /usr/local/programs/postgresql/data
$ su - postgres
## Initialize the database
$ ./bin/initdb -D ./data
$ ./bin/createdb test
$ ./bin/psql test    ## if you already have a database, import it into the data directory and try accessing it; password-protected setups may need further work
$ ./bin/postgres -D ./data >./logs/start-log-1.log 2>&1 &
$ ./bin/psql --list    ## list the databases
## ok, installation complete
## Custom settings, access control, etc. can be skipped until you are more familiar with the system
## Edit the database configuration and access-control files:
$ vi /usr/local/programs/postgresql/data/postgresql.conf    ## database configuration file
$ chown postgres postgresql.conf
$ chmod 644 postgresql.conf
$ vi /usr/local/programs/postgresql/data/pg_hba.conf    ## access-control file
$ vi /usr/local/programs/postgresql/data/pg_ident.conf
## Start at boot:
$ vi /etc/rc.d/rc.local    ## add the following
```
su - postgres -c "/usr/local/programs/postgresql/bin/pg_ctl -D /usr/local/programs/postgresql/data start"
```
6. Install PHP
## The source was already downloaded in step 1; now build and install it:
$ yum install libxml2 libxml2-devel libpng libpng-devel libjpeg libjpeg-devel freetype freetype-devel
$ ./configure \
    --prefix=/usr/local/programs/php \
    --with-apxs2=/usr/local/programs/apache2/bin/apxs \
    --with-zlib \
    --with-gd \
    --with-jpeg-dir \
    --with-png-dir \
    --with-freetype-dir \
    --with-zlib-dir \
    --enable-mbstring \
    --with-pgsql=/usr/local/programs/postgresql \
    --with-pdo-pgsql=/usr/local/programs/postgresql
$ make
$ make test
> Bug #42718 (unsafe_raw filter not applied when configured as default filter) [ext/filter/tests/bug42718.phpt] XFAIL REASON: FILTER_UNSAFE_RAW not applied when configured as default filter, even with flags
> Bug #67296 (filter_input doesn't validate variables) [ext/filter/tests/bug49184.phpt] XFAIL REASON: See Bug #49184
> Bug #53640 (XBM images require width to be multiple of 8) [ext/gd/tests/bug53640.phpt] XFAIL REASON: Padding is not implemented yet
> zend multibyte (7) [ext/mbstring/tests/zend_multibyte-07.phpt] XFAIL REASON: https://bugs.php.net/bug.php?id=66582
> zend multibyte (9) [ext/mbstring/tests/zend_multibyte-09.phpt] XFAIL REASON: https://bugs.php.net/bug.php?id=66582
> Bug #70470 (Built-in server truncates headers spanning over TCP packets) [sapi/cli/tests/bug70470.phpt] XFAIL REASON: bug is not fixed yet
## Checking the official bug tracker:
> id=66582: status: Closed. Fixed in master (PHP7)
> id=42718: status: Assigned
> id=42718: reference to id=49184, unsolved for many years
## Nothing blocking, so go ahead and install
$ make install
> You may want to add: /usr/local/programs/php/lib/php to your php.ini include_path
## Do as it suggests
$ cp php.ini-development /usr/local/programs/php/lib/php.ini
```
include_path = ".:/usr/local/programs/php/lib/php"
```
## Then edit the httpd configuration so it handles .php files correctly
```
...
LoadModule php5_module modules/libphp5.so
...
AddType application/x-httpd-php .php
AddType application/x-httpd-php-source .php5
...
<IfModule dir_module>
    DirectoryIndex index.html index.php
</IfModule>
```
## Restart httpd and test
$ cd /usr/local/programs/apache2
$ bin/httpd -h
$ bin/httpd -k stop
$ bin/httpd -f conf/httpd.conf
## The default document root is ./htdocs/, so create a test page there first
$ vi htdocs/index.php
```
<?php phpinfo(); ?>
```
$ curl http://localhost/index.php | grep -i postgresql    ## ok
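To double-check that the freshly built pgsql extension can actually talk to the local PostgreSQL instance (beyond grepping the phpinfo() output), a small test script along the following lines can be dropped into htdocs. This is only a sketch that assumes the test database and postgres user created in the previous step; adjust the connection string to your setup:
<?php
// Smoke test for the pgsql extension built above.
// Assumes the "test" database and "postgres" user created during the install;
// adjust host/user/password to whatever pg_hba.conf allows on your machine.
$conn = pg_connect('host=localhost dbname=test user=postgres');
if ($conn === false) {
    die('pg_connect failed - check pg_hba.conf and that PostgreSQL is running');
}
$result = pg_query($conn, 'SELECT version()');
$row = pg_fetch_row($result);
echo 'Connected: ' . $row[0] . PHP_EOL;
pg_close($conn);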
Follow-up tasks
* 1. Start the server without having to specify the configuration file manually
* 2. Set up PHP's initial www directory
* 3. PHP user and permission management
That covers compiling PHP from source with PostgreSQL support on CentOS 7, including the PostgreSQL and CentOS 7 details; hopefully it helps readers interested in PHP.
database – Converting a SQLite SQL dump file to PostgreSQL
Running sqlite3 database .dump > /the/path/to/sqlite-dumpfile.sql makes SQLite output a table dump in the following format:
BEGIN TRANSACTION;
CREATE TABLE "courses_school" ("id" integer PRIMARY KEY, "department_count" integer NOT NULL DEFAULT 0, "the_id" integer UNIQUE, "school_name" varchar(150), "slug" varchar(50));
INSERT INTO "courses_school" VALUES(1,168,213,'TEST Name A',NULL);
INSERT INTO "courses_school" VALUES(2,656,'TEST Name B',NULL);
....
COMMIT;
How do I convert the above into a PostgreSQL-compatible dump file that I can import into my production server?
/path/to/psql -d database -U username -W < /the/path/to/sqlite-dumpfile.sql
If you want the id column to "auto-increment", change its type from "int" to "serial" in the table-creation line. PostgreSQL will then attach a sequence to the column, so INSERTs with a NULL id are automatically assigned the next available value. PostgreSQL does not recognize the AUTOINCREMENT keyword either, so it needs to be removed.
You will also want to check for datetime columns in the SQLite schema and change them to timestamp for PostgreSQL (thanks to Clay for pointing this out).
If you have booleans in your SQLite database, convert 1 and 0 to 1::boolean and 0::boolean (respectively), or change the boolean column to an integer in the schema section of the dump and fix the values up inside PostgreSQL after the import.
If you have BLOBs in your SQLite database, adjust the schema to use bytea. You will probably need to mix in some decode calls as well. Writing a quick-and-dirty copier in your favorite language may be easier than mangling the SQL if you have a lot of BLOBs to deal with.
As usual, if you have foreign keys you will probably want to look into SET CONSTRAINTS ALL DEFERRED to avoid insert-ordering problems, placing the command inside the BEGIN/COMMIT pair.
Thanks to Nicolas Riley for the boolean, BLOB, and constraint notes.
If your dump contains backticks, as generated by some SQLite3 clients, you need to remove them.
PostgreSQL also does not recognize unsigned columns, so you may want to drop the unsigned qualifier or add a custom check constraint such as:
CREATE TABLE tablename ( ... unsigned_column_name integer CHECK (unsigned_column_name > 0) );
While SQLite defaults null values to '', PostgreSQL requires them to be set to NULL.
The syntax in the SQLite dump file appears to be mostly compatible with PostgreSQL, so you can patch up a few things and feed it to psql. Importing a large pile of data through SQL INSERTs may take a while, but it will work.
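In the spirit of the "patch a few things" advice above, a short PHP filter (a rough sketch under the assumptions noted in its comments, not a complete converter) can apply the most common rewrites before handing the dump to psql:
<?php
// Rough sketch of the "patch a few things" approach described above.
// Reads a SQLite dump on STDIN and applies a few common rewrites for PostgreSQL.
// Not a complete converter: BLOBs, booleans and quoting edge cases still need manual review.
while (($line = fgets(STDIN)) !== false) {
    // SQLite's auto-incrementing integer primary keys become SERIAL in PostgreSQL.
    $line = preg_replace('/integer PRIMARY KEY AUTOINCREMENT/i', 'SERIAL PRIMARY KEY', $line);
    // PostgreSQL has no AUTOINCREMENT keyword at all, so drop any stragglers.
    $line = str_ireplace(' AUTOINCREMENT', '', $line);
    // Strip backticks emitted by some SQLite3 clients.
    $line = str_replace('`', '', $line);
    // Map SQLite datetime columns to PostgreSQL timestamps.
    $line = preg_replace('/\bdatetime\b/i', 'timestamp', $line);
    echo $line;
}
// Example usage (file names are placeholders):
//   php sqlite-to-pg.php < sqlite-dumpfile.sql | psql -d database -U username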
Distributed PostgreSQL on a Google Spanner Architecture – Query Layer
Source: https://blog.yugabyte.com/distributed-postgresql-on-a-google-spanner-architecture-query-layer/
Our previous post dived into the details of the storage layer of YugaByte DB called DocDB, a distributed document store inspired by Google Spanner. This post focuses on YugaByte SQL (YSQL), a distributed, highly resilient, PostgreSQL-compatible SQL API layer powered by DocDB. A follow-up post will highlight the challenges faced and lessons learned when engineering such a database.
YSQL, Distributed PostgreSQL Made Real
YugaByte SQL (YSQL) is a distributed and highly resilient SQL layer, running across multiple nodes. It is compatible with the SQL dialect and wire protocol of PostgreSQL. This means that developers familiar with PostgreSQL can fully reuse their knowledge (and the standard PostgreSQL client drivers) to build an application powered by YSQL.
YSQL essentially transforms the monolithic PostgreSQL database into a DocDB-powered distributed database. To accomplish this, it reuses open source PostgreSQL’s query layer (written in C) as much as possible.
Following were the design goals we set for YSQL early on.
- Reuse the open source, mature and feature-rich PostgreSQL query layer
- Preserve existing PostgreSQL functionality and extend as necessary
- Enable migrations to newer versions of PostgreSQL by implementing features in a modular approach
Relentless execution towards the above goals has paid rich dividends. YSQL now supports a wider range of existing PostgreSQL functionality than we had originally expected. This is evident from the v1.2 feature matrix, examples being:
- DDL statements: CREATE, DROP and TRUNCATE tables
- Data types: all primitive types, including numeric types (integers and floats), text data types, byte arrays, date-time types, UUID and SERIAL, as well as JSONB
- DML statements: most statements, such as INSERT, UPDATE, SELECT and DELETE; the bulk of core SQL functionality now supported includes JOINs, WHERE clauses, GROUP BY, ORDER BY, LIMIT, OFFSET and SEQUENCES
- Transactions: ABORT, ROLLBACK, BEGIN, END, and COMMIT
- Expressions: a rich set of PostgreSQL built-in functions and operators
- Other features: VIEWs, EXPLAIN, PREPARE-BIND-EXECUTE, and JDBC support
As for the design goal of migrating to newer versions, YSQL started with PostgreSQL v10.4 and recently rebased onto PostgreSQL v11.2 in a matter of weeks!
How Does YSQL Work?
YSQL internals can be categorized into four distinct areas:
- System catalog management
- User table management
- The read and write IO Path
- Mapping SQL tables to a document store
The next sections detail each of the above areas. Before diving into the details, here’s a quick recap of DocDB from the first post of this series.
- Every table in DocDB has the same schema: one key maps to one document.
- As a distributed database, it replicates data on each write.
- Offers single-key linearizability and multi-key snapshot isolation (serializable isolation is in the works).
- Native support for secondary indexes on any document attribute.
- Efficient querying and updating a subset of attributes of any document.
System Catalog Management
The PostgreSQL documentation on system catalogs explains that the system catalogs are regular tables where schema metadata is stored, such as information about tables and columns, along with internal bookkeeping information. The initdb code path in PostgreSQL, which is completely separate from the code path that deals with user tables, creates and initializes the system catalog tables. So, in order to build a distributed SQL database with no single point of failure, it is essential to replicate these system catalogs.
1. Initialize system catalog through initdb
When YSQL starts up for the first time, a modified initdb executes and creates the system catalog as a replicated, single-tablet system catalog table in DocDB. This is shown in the figure above.
The system catalog tablet in DocDB forms a Raft group, which replicates data onto a set of nodes and can tolerate failures. In the figure above, the system catalog tablet leader is shown with a solid border while the followers are shown with a dotted border. This ensures that PostgreSQL can still rely on the familiar system catalog in order to function.
2. Ready to serve apps
Once the system catalogs are created, YSQL can be used by applications. Since the data is replicated across nodes and persisted on disk, initdb is not needed on subsequent restarts of the cluster.
User Table Management
Now that the YSQL cluster is up and running, let us consider the scenario when a user creates a table. This happens in the following four steps.
1. Parse and analyze the query
Just as with PostgreSQL, the query is received by the PostgreSQL server process, which parses, analyzes and executes it.
2. Route query to tablet leader of DocDB system catalog
In the case of regular PostgreSQL, the execution phase would add entries to the system catalog tables and create some directories and files on the local filesystem. In the case of YSQL, this update to the system catalog is sent to the tablet leader of the distributed system catalog table in DocDB.
3. Replicate system catalog entry across nodes in DocDB
The tablet leader of the distributed system catalog table in DocDB is responsible for replicating the update to the followers. This is done using Raft consensus, which ensures that the update is linearizable even in the presence of faults.
4. Create user table in DocDB
Now that the entry has been persisted in the system catalog, the next step of the execution phase is to create a distributed DocDB table. This involves creating a number of tablets (which have replicas) across a set of nodes. This is shown in the diagram below.
Once the above steps are complete, the table is ready to use.
Read/Write IO Path
The read and write IO paths are quite similar. Let us understand the write IO path, which involves replication of data in DocDB. The read IO path is similar, except for the last step which can serve data directly from the leader of the tablet in DocDB.
1. Parse and analyze the query
Just as with PostgreSQL, the PostgreSQL server process receives the query. It then goes through the parser, analyzer, planner and the executor. Some of the planning, analysis and execution steps, however, are different to accommodate a distributed database instead of the local store.
2. Route the insert to the tablet leader
The SQL insert statement may end up updating a single row or multiple rows. Although DocDB can handle both cases natively, these two cases are detected and handled differently to improve the performance of YSQL. Single row inserts are routed directly to the tablet leader that owns the primary key of that row. Inserts affecting multiple rows are sent to a global transaction manager which performs a distributed transaction. The single-row insert case is shown below.
3. Replicate the write through Raft
In the case of single-row inserts, the tablet leader replicates the data onto the followers using the Raft protocol. This simpler case is shown below. In the case of multi-row inserts, the global transaction manager writes multiple records (transaction status records, provisional records, etc.) across tablets, often on different nodes. Each of these writes is replicated using Raft consensus. Hybrid logical clock (HLC) tracking in the cluster serves as a coarsely synchronized, highly available global clock to coordinate the writes. The result is a high-performance system whose writes are fault tolerant.
Mapping SQL Tables to Documents
Each user table in YSQL maps to a corresponding DocDB table with multiple tablets. The YSQL tables come with their own schemas, while all the DocDB tables have the same schema, which is shown below. The actual schema enforcement is done using table schema metadata.
DocKey → { Document Value }
The combined set of primary key column values is used to construct the DocKey above. Each of the value columns (non-primary-key columns) is mapped to one attribute in the Document Value above.
The various YSQL constructs are mapped to suitable DocDB equivalents. This is shown in the table below.
So how does this look in practice? Let us take an example. Consider the following rather simple table.
CREATE TABLE msgs (
    user_id TEXT,
    msg_id INT,
    subject TEXT,
    msg TEXT,
    PRIMARY KEY (user_id, msg_id)
);
This will correspond to a DocDB table that has a document key to value schema. Now, let us perform the following insert at time T1.
T1: INSERT INTO msgs (user_id, msg_id, subject, msg)
    VALUES ('user1', 10, 'hello', 'hello world');
This will get translated into the following entries in the DocDB table.
DocKey ('user1', 10):
{
    column_id (subject), T1 -> 'hello',
    column_id (msg), T1 -> 'hello world'
}
YSQL Benefits
A YSQL cluster appears as a single logical PostgreSQL database to applications. All nodes in the YSQL layer are identical and application clients can connect to any node in order to read or write data. Along with maximum PostgreSQL compatibility, such an architecture delivers a number of benefits.
Horizontal Write Scalability
Since DocDB is capable of being scaled out on demand, a stateless YSQL tier makes it easy to add nodes on demand. This enables rapid scaling of the cluster when more resources (CPU, memory, storage capacity) are required.
Highly Resilient w/ Native Failover & Repair
The underlying DocDB cluster is fault tolerant, which means that node failures do not affect the SQL application using this distributed SQL database. It simply starts communicating with a new node, as opposed to native PostgreSQL, where the common master-slave replication approach inevitably leads to manual failover and/or an inability to serve recent commits.
Geo-Distribution w/ Multi-Region Deployments
DocDB supports geo-distributed deployments, meaning you can deploy a distributed SQL database across different geographic regions and zones.
Cloud Native Operations
DocDB allows dynamically changing nodes of the database with no app impact. Schema changes as well as infrastructure migrations are now zero downtime, even for a SQL database.
Summary
Bringing together two iconic database technologies such as Spanner and PostgreSQL into a new open source, cloud native database has been an immensely satisfying engineering achievement. However, we understand that a well-engineered database on its own right does not build trust in the minds of developers and architects. We have to earn that trust using the traditional means of communication, collaboration and sharing of success stories.
Through this series of posts, we explain our design principles, the tradeoffs associated with those principles, the actual implementation details and finally, the lessons learned especially around some of the more challenging aspects. We intend to prove our claims through exhaustive correctness testing (such as Jepsen) as well as comprehensive performance benchmarking (including TPCC). As we make rapid progress towards YSQL GA this summer, we are working closely with a few of our current users to highlight how YSQL can complement their existing investment in YugaByte DB. If your project can benefit from YSQL as well, don’t hesitate to reach us on our community Slack channel.
What’s Next?
- Compare YugaByte DB in depth to databases like CockroachDB, Google Cloud Spanner and MongoDB.
- Get started with YugaByte DB on macOS, Linux, Docker and Kubernetes.
- Contact us to learn more about licensing, pricing or to schedule a technical overview.
That concludes the introduction to php – converting a Google search query to a PostgreSQL "tsquery" and the Google search URL format. Thanks for reading. For more on converting user-entered search queries to SQL Server full-text WHERE clauses, compiling PHP from source with PostgreSQL support on CentOS 7, converting SQLite SQL dump files to PostgreSQL, or Distributed PostgreSQL on a Google Spanner Architecture – Query Layer, please search this site.