In this article we take a detailed look at why DisposeLocalCopyOfClientHandle needs to be called after a connection has been established. We also touch on (in Fedora 31) building AOSP 9 - flex-2.5.39: loadlocale.c:130: _nl_intern_locale_data: failed, A Guide to Blocks & Grand Central Dispatch (and the Cocoa API's making use of them), android – what is the difference between AlertDialog.Builder.setView and Dialog.setContentView?, and android – when to call dispose and clear on CompositeDisposable, to give a fuller picture of the topic.
Contents of this article:
- The need to call DisposeLocalCopyOfClientHandle() after establishing a connection
- (in Fedora 31) building AOSP 9 - flex-2.5.39: loadlocale.c:130: _nl_intern_locale_data: failed
- A Guide to Blocks & Grand Central Dispatch (and the Cocoa API's making use of them)
- android – What is the difference between AlertDialog.Builder.setView and Dialog.setContentView?
- android – When to call dispose and clear on CompositeDisposable
The need to call DisposeLocalCopyOfClientHandle() after establishing a connection
One of the steps after establishing a connection over an anonymous pipe requires the server to call DisposeLocalCopyOfClientHandle. MSDN explains:
The DisposeLocalCopyOfClientHandle method should be called after the client handle has been passed to the client. If this method is not called, the AnonymousPipeServerStream object will not be notified when the client disposes of its PipeStream object.
Trying to understand why the server would not notice the client closing, I went on to look at DisposeLocalCopyOfClientHandle in the reference source:

// This method is an annoying one but it has to exist at least until we make passing handles between
// processes first class. We need this because once the child handle is inherited, the OS considers
// the parent and child's handles to be different. Therefore, if a child closes its handle, our
// Read/Write methods won't throw because the OS will think that there is still a child handle around
// that can still Write/Read to/from the other end of the pipe.
//
// Ideally, we would want the Process class to close this handle after it has been inherited. See
// the pipe spec future features section for more information.
//
// Right now, this is the best signal to set the anonymous pipe as connected; if this is called, we
// know the client has been passed the handle and so the connection is live.
[System.Security.SecurityCritical]
public void DisposeLocalCopyOfClientHandle() {
    if (m_clientHandle != null && !m_clientHandle.IsClosed) {
        m_clientHandle.Dispose();
    }
}

This sentence is what confuses me:
once the child handle is inherited, the OS considers the parent and child's handles to be different.
Aren't the parent's handle and the child's handle (that is, the server's m_handle and the server's m_clientHandle that gets passed to the child) different to begin with? Does "different" here mean "referring to different objects" (which is how I understand it), or does it mean something else?
Your confusion stems from the fact that the server and the client are also the parent process and the child process. A pipe handle belongs to either the server end or the client end, but it can exist in both the parent and the child. For a moment, after the server has spawned the client but before DisposeLocalCopyOfClientHandle is called, three handles are in play:
- the server handle of the pipe, in the server (parent) process;
- the client handle of the pipe, in the server (parent) process;
- the client handle of the pipe, in the client (child) process, inherited from the parent.
The second handle needs to be closed once the child is up and running because, as the comment explains, the pipe remains usable as long as any client handle is still open. If that second handle sticks around, it prevents the server from ever detecting that the child process is done.
Instead of using inheritance, the implementation could also have spawned the child process and used DuplicateHandle; since the original handle could then be closed immediately, this helper method would not be needed. That is presumably what "make passing handles between processes first class" refers to.
A detail that is hard to see from .NET is the bInheritHandles argument of CreateProcess(), a nasty little unixism that crept into the winapi. Determining the correct value for it is quite difficult, because you have to know a lot about the process you are starting, and it composes really poorly: it is an all-or-nothing choice. Raymond Chen has a blog post that talks about the ugly corner cases and how they addressed the problem in Windows 6.0.
That fix is not otherwise something .NET can rely on, mainly because .NET still supports older Windows versions, and it would be fairly awkward to use anyway. So the ProcessStartInfo class has no property that lets you explicitly control the bInheritHandles value; Process.Start() always passes TRUE. This is what "until we make passing handles between processes first class" means.
A further detail is that the handle inherited by the child process is a separate handle, distinct from the parent's handle. So a total of two CloseHandle calls are needed to destroy the underlying system object. In other words, both the parent and the child need to stop using the object. This is what "the OS considers the parent and child's handles to be different" means.
The underlying CreatePipe() winapi function used to create an anonymous pipe returns two handles, one for reading and one for writing. Depending on the direction of the pipe, the parent (aka the server) is supposed to use one and the child process (aka the client) the other. These handles are inheritable, so after the child process is started a total of four CloseHandle calls are needed to destroy the pipe object.
That is unpleasant. The .NET wrapper can do something about the server-side handle: it calls DuplicateHandle() to create a copy of the server-side handle, passing FALSE for the bInheritHandle argument, and then closes the original handle. Good, the child process will no longer inherit the server-side handle, so now only three CloseHandle calls are needed.
The same trick cannot be used for the pipe handle that the child process needs to use, however. After all, the whole point is for that handle to be inherited so the child can talk to the server. That is why you have to do it explicitly, once you know the child process has started properly. After your DisposeLocalCopyOfClientHandle() call, only two CloseHandle calls remain.
The client-side CloseHandle call is simple: it happens when you call Close or Dispose on the AnonymousPipeClientStream. Or when the process is torn down by an unhandled exception, in which case the OS takes care of closing the handle. Now only one CloseHandle call is left.
That last one, on the server side, is harder. The server only knows to Close/Dispose its AnonymousPipeServerStream when it gets the "notification" that the child process is no longer using the pipe. Scare quotes around "notification": there is no event that tells you this. The proper way is for the child process to send an explicit "goodbye" message so the server knows to call Close. The not-so-proper but not uncommon way is for the child to skip the goodbye, in which case the server can only find out from the exception it gets when it keeps using the pipe.
And that is the crux: you only get that exception when the OS sees the server trying to use the pipe while there are no handles left at the other end. In other words, if you forget to call DisposeLocalCopyOfClientHandle(), you never get that exception. Not good.
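To make the handle bookkeeping above concrete, here is a minimal C# sketch of the server side; it is an illustration of the pattern, not code from the question, and the child executable name "ChildApp.exe" is made up.

using System;
using System.Diagnostics;
using System.IO;
using System.IO.Pipes;

class PipeServer
{
    static void Main()
    {
        // Server (read) end of an anonymous pipe; the client handle is created inheritable.
        using (var server = new AnonymousPipeServerStream(PipeDirection.In,
                                                          HandleInheritability.Inheritable))
        {
            var child = new Process();
            child.StartInfo.FileName = "ChildApp.exe";                // hypothetical client program
            child.StartInfo.Arguments = server.GetClientHandleAsString();
            child.StartInfo.UseShellExecute = false;                  // needed so the handle is inherited
            child.Start();

            // The child now holds its own inherited copy of the client handle, so drop
            // the server's local copy; otherwise the pipe never looks "closed" to the server.
            server.DisposeLocalCopyOfClientHandle();

            using (var reader = new StreamReader(server))
            {
                string line;
                // ReadLine returns null once the child has closed its end of the pipe.
                while ((line = reader.ReadLine()) != null)
                    Console.WriteLine("From client: " + line);
            }

            child.WaitForExit();
        }
    }
}

If DisposeLocalCopyOfClientHandle() were omitted here, the read loop would block forever even after the child exits, because the server's own copy of the client handle keeps the other end of the pipe alive.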
(in Fedora 31) building AOSP 9 - flex-2.5.39: loadlocale.c:130: _nl_intern_locale_data: failed
# Error
flex-2.5.39: loadlocale.c:130: _nl_intern_locale_data: Assertion `cnt < (sizeof (_nl_value_type_LC_TIME) / sizeof (_nl_value_type_LC_TIME[0]))' failed.
/bin/sh: line 1: 58421 Aborted (core dumped) /mnt/android/Android-x86---pie-x86---9.0-rc1/prebuilts/misc/linux-x86/flex/flex-2.5.39 -oscripts/kconfig/zconf.lex.c -L /mnt/android/Android-x86---pie-x86---9.0-rc1/kernel/scripts/kconfig/zconf.l
# Fix
rm prebuilts/misc/linux-x86/flex/flex-2.5.39
ln -s /usr/bin/flex prebuilts/misc/linux-x86/flex/flex-2.5.39
# After the change above, the following error appeared. Fix: rebuild the included flex, using the steps below
out/soong/.intermediates/frameworks/compile/mclinker/lib/Script/libmcldScript/android_x86_64_core_static/gen/lex/frameworks/compile/mclinker/lib/Script/ScriptScanner.cpp:1487:8: error: member reference type 'std::istream *' (aka 'basic_istream<char> *') is a pointer; did you mean to use '->'?
yyin.rdbuf(std::cin.rdbuf());
out/soong/.intermediates/frameworks/compile/mclinker/lib/Script/libmcldScript/android_x86_64_core_static/gen/lex/frameworks/compile/mclinker/lib/Script/ScriptScanner.cpp:1490:9: error: member reference type 'std::ostream *' (aka 'basic_ostream<char> *') is a pointer; did you mean to use '->'?
yyout.rdbuf(std::cout.rdbuf());
# Next, remove the symlink again: rm prebuilts/misc/linux-x86/flex/flex-2.5.39
# rebuild the included flex
cd prebuilts/misc/linux-x86/flex
rm flex-2.5.39
tar zxf flex-2.5.39.tar.gz
cd flex-2.5.39
./configure
make
mv flex ..
cd ..
rm flex-2.5.39 -rf
mv flex flex-2.5.39
cd /mnt/android/Android-x86---pie-x86---9.0-rc1
m -j12 iso_img
A Guide to Blocks & Grand Central Dispatch (and the Cocoa API's making use of them)
Intro
As you may or may not know, I recently did a talk at the Des Moines CocoaHeads in which I reviewed Blocks and Grand Central Dispatch. I have tried to capture the content of that talk and a lot more here in this article. The talk encompassed
- Blocks
- Grand Central Dispatch
- GCD Design Patterns
- Cocoa API's using GCD and Blocks
All of the content of this article applies only to Mac OS X 10.6 Snow Leopard, as blocks support and Grand Central Dispatch are only available there. There are alternate methods to get blocks onto Mac OS X 10.5 and the iPhone OS via projects like Plausible Blocks, which have blocks support though not Grand Central Dispatch (libdispatch).
Grand Central Dispatch is Open Source
I should mention that Apple has in fact open sourced libdispatch (Grand Central Dispatch) on Mac OS Forge, and the other components like kernel support for GCD (not strictly necessary if implemented on other OSes) and the blocks runtime are all freely available; if you want, you can even check out the libdispatch repository using Git with the command git clone git://git.macosforge.org/libdispatch.git
Blocks
Blocks are part of a new C language extension, and are available in C, Objective-C, C++ and Objective-C++. Right off the bat, I should say that while we will use blocks with Grand Central Dispatch a lot later on, they are not required when using Grand Central Dispatch. Blocks are very useful on their own even if you never use them with anything else. However, we gain a lot of benefit when we use blocks with Grand Central Dispatch, so pretty much all my examples here will use blocks.
What are blocks? Well let me show you a very basic example of one
^{ NSLog(@"Inside a block"); }
This is a very basic example of a block,but it is basically a block that accepts no arguments and contains a NSLog()
statement. Think of blocks as either a method or snippet of code that accepts arguments and captures lexical scope. Other languages have already had something like this concept implemented for a while (since the 70''s at least if I remember correctly.) Here''s a couple examples of this concept in one of my favorite languages Python
>>>f = lambda x,y,z: x + y + z ... >>> f(2,3,4) 9
Here we are defining a lambda in Python which is basically a function that we can execute later on. In Python after the lambda keyword you define the arguments that you are passing in to the left of the colon and the right is the actual expression that will get executed. So in the first line of code we''ve defined a lambda that accepts 3 arguments and when it''s invoked all it will do is accept the arguments and add them together,hence when we invoke f like f(2,4)
we get 9 back. We can do more with Python lambda''s. Python has functions that actually do more with lambdas like in this example...
>>>reduce((lambda x,y: x+y),[1,2,4]) 10 >>>reduce((lambda x,y: x*y),4]) 24
This reduce function uses a lambda that accepts 2 arguments to iterate over an array. The lambda in this case accepts 2 arguments (as the reduce function requires) and in the first example just iterates over the array with it. Python begins by calling the lambda using the first 2 elements of the array then gets a resulting value and again calls the lambda with that resulting value and the next element in the array and keeps on calling the lambda until it has fully iterated over the data set. So in other words the function is executed like so (((1 + 2) + 3) + 4)
Blocks bring this concept to C and do a lot more. You might ask yourself "But haven't we already had this in C? I mean there are C function pointers." Well yeah, but while blocks are similar in concept, they do a lot more than C function pointers, and even better, if you already know how to use C function pointers, blocks should be fairly easy to pick up.
Here is a C Function Pointer Declaration...
void (*func)(void);
...and here is a Block Declaration...
void (^block)(void);
Both define a function that returns nothing (void) and takes no arguments. The only difference is that we've changed the name and swapped out a "*" for a "^". So let's create a basic block

int (^MyBlock)(int) = ^(int num) { return num * 3; };

The block is laid out like so. The first part signifies that it's a block returning an int. The (^MyBlock)(int) part declares a block named MyBlock that accepts an int as an argument. Then the ^(int num) to the right of the assignment operator is the beginning of the block literal; it means this is a block that accepts an int as an argument (matching the declaration earlier). Finally, the { return num * 3; } is the actual body of the block that will be executed.
Once we've defined the block as shown earlier, we can assign it to variables and pass it around as an argument, and call it like so...

int aNum = MyBlock(3);
printf("Num %i", aNum); // prints "Num 9"

Blocks Capturing Scope: When I said earlier that blocks capture lexical scope, this is what I mean: blocks are not only useful as a replacement for C function pointers, they also capture the state of any references you use within the block itself. Let me show you...

int spec = 4;
int (^MyBlock)(int) = ^(int aNum){ return aNum * spec; };
spec = 0;
printf("Block value is %d", MyBlock(4));

Here we've done a few things. First I declared an integer and assigned it a value of 4. Then we created the block and assigned it to an actual block implementation, and finally called the block in a printf statement. And it prints out "Block value is 16"? Wait, we changed the spec number to 0 just before we called it, didn't we? Well yes, actually we did. But what blocks actually do is create a const copy of anything you reference in the block itself that is not passed in as an argument. So in other words, we can change the variable spec to anything we want after assigning the block, but unless we pass the variable in as an argument, the block will always return 16, assuming we call it as MyBlock(4). I should also note that we can use C's typedef facility to make referencing this type of block easier. So in other words...

int spec = 4;
typedef int (^MyBlock)(int);
MyBlock InBlock = ^(int aNum){ return aNum * spec; };
spec = 0;
printf("InBlock value is %d", InBlock(4));

is exactly equivalent to the previous code example; the difference is that the latter is more readable.
__block
Blocks do have a new storage attribute that you can affix to variables. Let's say that in the previous example we want the block to read our spec variable by reference, so that when we do change spec, our call to InBlock(4) actually returns what we expect it to return, which is 0. To do so, all we need to change is to add __block to spec like so...

__block int spec = 4;
typedef int (^MyBlock)(int);
MyBlock InBlock = ^(int aNum){ return aNum * spec; };
spec = 0;
printf("InBlock value is %d", InBlock(4));

and now the printf statement finally spits out "InBlock value is 0", because now it's reading the variable spec by reference instead of using the const copy it would otherwise use.

Blocks as Objective-C objects and more!
Naturally, going through this, you'd almost be thinking right now that blocks are great but could potentially have some problems with Objective-C. Not so: blocks are Objective-C objects! They do have an isa pointer and do respond to basic messages like -copy and -release, which means we can use them with Objective-C dot syntax like so...

@property (copy) void (^myCallback)(id obj);
@property (readwrite, copy) MyBlock inBlock;

and in your Objective-C code you can call your blocks just like so
self.inBlock();
Finally, I should note that while debugging your code there is a new GDB command specifically for calling blocks, like so

$gdb invoke-block MyBlock 12   // like MyBlock(12)
$gdb invoke-block StringBlock "\" String \""

These give you the ability to call your blocks and pass arguments to them during your debug sessions.
Grand Central Dispatch
Now onto Grand Central Dispatch (which I may just reference as GCD from here on out). Unlike past additions to Mac OS X, like say NSOperation/NSThread subclasses, Grand Central Dispatch is not just a new abstraction around what we've already been using; it's an entirely new underlying mechanism that makes multithreading easier and makes it easy to be as concurrent as your code can be, without you worrying about variables like how much work your CPU cores are doing, how many CPU cores you have, and how many threads you should spawn in response. You just use the Grand Central Dispatch API's and it handles the work of doing the appropriate amount of work. This is also not just in Cocoa; anything running on Mac OS X 10.6 Snow Leopard can take advantage of Grand Central Dispatch (libdispatch) because it's included in libSystem.dylib, and all you need to do is include
#import <dispatch/dispatch.h>
in your app and you'll be able to take advantage of Grand Central Dispatch. Grand Central Dispatch also has some other nice benefits. I've mentioned this before in other talks, but in OS design there are 2 main memory spaces (kernel space and user land). When code you call executes a syscall and digs down into the kernel, you pay a time penalty for doing so. Grand Central Dispatch will try to do its best with some of its API's to bypass the kernel and return to your application without digging into the kernel, which means it is very fast. However, if GCD needs to, it can go down into the kernel, execute the equivalent system call, and return back to your application.
Lastly, GCD does some things that threading solutions in Leopard and earlier did not do. For example, NSOperationQueue in Leopard took in NSOperation objects and created a thread, ran the NSOperation's -(void)main on the thread, then killed the thread, and repeated the process for each NSOperation object it ran; pretty much all we did on Leopard and earlier was create threads, run them and then kill them. Grand Central Dispatch, however, has a pool of threads. When you call into GCD it gives you a thread that you run your code on, and when it's done it gives the thread back to GCD. Additionally, queues in GCD will (when they have multiple blocks to run) just keep the same thread(s) running and run multiple blocks on them, which gives you a nice speed boost, and only when there is no more work to do is the thread handed back to GCD. So with GCD on Snow Leopard we get a nice speed boost just by using it, because we are reusing resources over and over again, and when we aren't using them we just give them back to the system. This makes GCD very nice to work with; it's fast, efficient and light on your system. Even though GCD is fast and light, you should still make sure that when you hand blocks to GCD there is enough work in them that it's worth using a thread and concurrency. You can also create as many queues as you want to match however many tasks you are doing; the only constraint is the memory available on the user's system.
GCD API
So if we have a basic block again like this
^{ NSLog(@"Doing something"); }then to get this running on another thread all we need to do is use
dispatch_async()
like so...dispatch_async(queue,^{ NSLog(@"Doing something"); });so where did that
queue
reference come from? Well we just need to create or get a reference to a Grand Central dispatch Queue ( dispatch_queue_t ) like thisdispatch_queue_t queue = dispatch_get_global_queue(0,0);which just in case you''ve seen this code is equivalent to
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

In Grand Central Dispatch the two most basic things you'll deal with are queues (dispatch_queue_t) and the API's to submit blocks to a queue, such as dispatch_async() or dispatch_sync(); I'll explain the difference between the two later on. For now, let's look at the GCD queues.
The Main Queue
The main queue in GCD is analogous to the main app thread (aka the AppKit thread). The main queue cooperates with
NSApplicationMain()
to schedule blocks you submit to it to run on the main thread. This will be very handy to use later on; for now, this is how you get a handle to the main queue

dispatch_queue_t main = dispatch_get_main_queue();

or you could just call dispatch_get_main_queue() inside of a dispatch call like so

dispatch_async(dispatch_get_main_queue(), ^{ ... });

The Global Queues
The next type of queue in GCD are the global queues. There are 3 of them that you can submit blocks to. The only difference between them is the priority with which their blocks are dequeued. GCD defines the following priorities, which help you get a reference to each of the queues...

enum {
    DISPATCH_QUEUE_PRIORITY_HIGH    =  2,
    DISPATCH_QUEUE_PRIORITY_DEFAULT =  0,
    DISPATCH_QUEUE_PRIORITY_LOW     = -2,
};

When you call
dispatch_get_global_queue()
with DISPATCH_QUEUE_PRIORITY_HIGH as the first argument, you get a reference to the high-priority global queue, and so on for the default and low priorities. As I said earlier, the only difference is the order in which GCD empties the queues: by default it will dequeue the high-priority queue's blocks first, then the default queue's blocks, and then the low. This priority doesn't really have anything to do with CPU time.
Private Queues
Finally there are the private queues; these are your own queues that dequeue blocks serially. You can create them like so

dispatch_queue_t queue = dispatch_queue_create("com.MyApp.AppTask", NULL);

The first argument to
dispatch_queue_create()
is essentially a C string which represents the label for the queue. This label is important for several reasons
- You can see it when running Debug tools on your app such as Instruments
- If your app crashes in a private queue the label will show up on the crash report
- As there are going to be lots of queues on 10.6 it''s a good idea to differentiate them
By default, when you create your private queues they actually all target the default global queue. Yes, you can point these queues at other queues to make a queue hierarchy using dispatch_set_target_queue(). The only thing Apple discourages is making a loop in the graph, where you make a queue that points to another and another, eventually winding back to the first one, because that behavior is undefined. So you can create a queue and target it at the high-priority queue, or any other queue, like so

dispatch_queue_t queue = dispatch_queue_create("com.App.AppTask", NULL);
dispatch_queue_t high = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
dispatch_set_target_queue(queue, high);

If you wanted to, you could do exactly the same with your own queues to create the queue hierarchies I described earlier on.
Suspending Queues
Additionally, you may need to suspend queues, which you can do with dispatch_suspend(queue). This works exactly like NSOperationQueue in that it won't suspend execution of the block that is currently running, but it will stop the queue from dequeueing any more blocks. You should be aware of how you do this, though; for example, in the next example it's not clear at all what has actually run.
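A minimal sketch of the kind of example meant here (the queue label and log messages are placeholders, not the article's original code):

dispatch_queue_t queue = dispatch_queue_create("com.MyApp.SuspendDemo", NULL);
dispatch_async(queue, ^{ NSLog(@"First block");  });
dispatch_async(queue, ^{ NSLog(@"Second block"); });
dispatch_suspend(queue);  // stops further dequeueing, but does not interrupt a block already running
dispatch_async(queue, ^{ NSLog(@"Third block"); }); // stays queued until dispatch_resume(queue)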
In the above example it's not clear at all what has run, because it's entirely possible that any combination of the blocks may have run.
Memory Management
It may seem a bit odd, but even in fully garbage-collected code you still have to call dispatch_retain() and dispatch_release() on your Grand Central Dispatch objects, because as of right now they don't participate in garbage collection.
Recursive Decomposition
Now, calling dispatch_async() is fine for running code on a background thread, but we usually need to push the result of that work back to the main thread; how would one go about this? Well, we can use that main queue and just dispatch_async() back to the main thread from within the first dispatch_async() call and update the UI there. Apple has referred to this as recursive decomposition, and it works like this

dispatch_queue_t queue = dispatch_queue_create("com.app.task", NULL);
dispatch_queue_t main = dispatch_get_main_queue();
dispatch_async(queue, ^{
    CGFloat num = [self doSomeMassiveComputation];
    dispatch_async(main, ^{
        [self updateUIWithNumber:num];
    });
});
In this bit of code the computation is offloaded onto a background thread with dispatch_async(), and then all we need to do is dispatch_async() back onto the main queue, which schedules our block to run with the updated data that we computed on the background thread. This is generally the preferable approach, because Grand Central Dispatch works best with this asynchronous design pattern. If you really need to use dispatch_sync() and, for some reason, absolutely make sure a block has run before going on, you could accomplish the same thing with a bit of code like the sketch below.
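A minimal sketch of the synchronous variant (not the article's original snippet; it reuses the same hypothetical -doSomeMassiveComputation and -updateUIWithNumber: methods from the example above):

dispatch_queue_t queue = dispatch_queue_create("com.app.task", NULL);

__block CGFloat num = 0;
dispatch_sync(queue, ^{
    // dispatch_sync() does not return until this block has finished executing.
    num = [self doSomeMassiveComputation];
});
// The block above is guaranteed to have run, so num is safe to use here.
[self updateUIWithNumber:num];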
dispatch_sync() works just like dispatch_async() in that it takes a queue as an argument and a block to submit to that queue, but dispatch_sync() does not return until the block you've submitted has finished executing. So in other words, the [self updateUIWithNumber:num]; code is guaranteed not to execute before the code in the block has finished running on another thread. dispatch_sync() will work just fine, but remember that Grand Central Dispatch works best with asynchronous design patterns like the first bit of code, where we simply dispatch_async() back to the main queue to update the user interface as appropriate.
dispatch_apply()
dispatch_async() and dispatch_sync() are fine for dispatching bits of code one at a time, but if you need to dispatch many blocks at once this is inefficient. You could use a for loop to dispatch many blocks, but luckily GCD has a built-in function for doing this that automatically waits until all the blocks have executed. dispatch_apply() is really aimed at going through an array of items and then continuing execution only after all the blocks have run, as in the sketch below.
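A minimal sketch of that pattern (the array and its contents are placeholders, not the article's original snippet):

#define COUNT 10
int storage[COUNT] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
int *values = storage; // a block cannot capture a local C array directly, so capture a pointer to it

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_apply(COUNT, queue, ^(size_t i) {
    // Each index may be processed on any thread, in any order.
    values[i] *= 2;
});
// dispatch_apply() has waited for every iteration, so the data is fully updated here.
printf("values[9] is now %d\n", values[9]);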
This is GCD's way of going through arrays; you'll see later on that Apple has added Cocoa API's for accomplishing this with NSArrays, NSSets, etc. dispatch_apply() will take your block and iterate over the array as concurrently as it can. I've run it sometimes where it takes the indexes 0, 2, 4, 6, 8 on core 1 and 1, 3, 5, 7, 9 on core 2, and sometimes it does odd patterns where it handles most of the items on core 1 and some on core 2; the point being that you don't know how concurrent it will be, but you do know that GCD will iterate over your array, or dispatch all the blocks up to the max count you give it, as concurrently as it can, and once it's done you just go on and work with your updated data.
Dispatch Groups
Dispatch groups were created to group several blocks together and then dispatch another block once all the blocks in the group have finished executing. Groups are set up very easily and the syntax isn't very dissimilar from dispatch_async(). The dispatch_group_notify() API is what sets the final block to be executed once all the other blocks have finished, as in the sketch below.
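A minimal sketch of the pattern just described (queue choice and log messages are placeholders, not the article's original code):

dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_group_t group = dispatch_group_create();

dispatch_group_async(group, queue, ^{ NSLog(@"Task 1"); });
dispatch_group_async(group, queue, ^{ NSLog(@"Task 2"); });

// Runs only after both tasks above have finished; here we hop back onto the main queue.
dispatch_group_notify(group, dispatch_get_main_queue(), ^{
    NSLog(@"All tasks in the group are done");
});

dispatch_release(group); // see the Memory Management note above: dispatch objects are not garbage collected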
Other GCD API you may be interested in

// Make sure GCD dispatches a block only 1 time
dispatch_once()
// Dispatch a block after a period of time
dispatch_after()
// Print debugging information
dispatch_debug()
// Create a new dispatch source to monitor low-level system objects
// and automatically submit a handler block to a dispatch queue in response to events.
dispatch_source_create()

Cocoa & Grand Central Dispatch/Blocks
The GCD API's, for being low-level API's, are very easy to write, and quite frankly I love them and have no problem using them, but they are not appropriate for all situations. Apple has implemented many new API's in Mac OS X 10.6 Snow Leopard that take advantage of Blocks and Grand Central Dispatch so that you can work with existing classes more easily, faster, and, when possible, concurrently.
NSOperation and NSBlockOperation
NSOperation has been entirely rewritten on top of GCD to take advantage of it and provide some new functionality. In Leopard, when you used NSOperation(Queue) it created and killed a thread for every NSOperation object; in Mac OS X 10.6 it now uses GCD and will reuse threads to give you a nice performance boost. Additionally, Apple has added a new NSOperation subclass called NSBlockOperation to which you can add a block, and in fact multiple blocks. Apple has also added a completion block method to NSOperation where you can specify a block to be executed when an NSOperation object completes (goodbye KVO for many NSOperation objects).
NSBlockOperation can be a nice easy way to use everything that NSOperation offers and still use blocks with NSOperation.
NSOperationQueue *queue = [[NSOperationQueue alloc] init];

NSBlockOperation *operation = [NSBlockOperation blockOperationWithBlock:^{
    NSLog(@"Doing something...");
}];

// you can add more blocks
[operation addExecutionBlock:^{
    NSLog(@"Another block");
}];

[operation setCompletionBlock:^{
    NSLog(@"Doing something once the operation has finished...");
}];

[queue addOperation:operation];

Used this way, NSBlockOperation starts to look a lot like a high-level dispatch group, in that you can add multiple blocks and set a completion block to be executed.
Concurrent Enumeration Methods
One of the biggest implications of Blocks and Grand Central Dispatch is adding support for them throughout the Cocoa API's to make working with Cocoa/Objective-C easier and faster. Here are a couple of examples of enumerating over an NSDictionary using just a block, and then enumerating over it concurrently.

// non-concurrent dictionary enumeration with a block
[dict enumerateKeysAndObjectsUsingBlock:^(id key, id obj, BOOL *stop) {
    NSLog(@"Enumerating Key %@ and Value %@", key, obj);
}];

// concurrent dictionary enumeration with a block
[dict enumerateKeysAndObjectsWithOptions:NSEnumerationConcurrent
                              usingBlock:^(id key, id obj, BOOL *stop) {
    NSLog(@"Enumerating Key %@ and Value %@", key, obj);
}];
The documentation is a little dry about what happens here, saying just "Applies a given block object to the entries of the receiver." What it doesn't mention is that because it has a block reference it can do this concurrently, and GCD will take care of all the details of how it accomplishes this concurrency for you. You could also use the BOOL *stop pointer to search for objects inside an NSDictionary and just set *stop = YES; inside the block to stop any further enumeration once you've found the key you are looking for.
High Level Cocoa API's vs Low Level GCD API's
Chris Hanson earlier wrote about why you should use NSOperation vs the GCD API's. He does make some good points; however, I will say that I haven't actually used NSOperation yet on Mac OS X 10.6 (although I definitely will be using it later on in the development of my app), because the Grand Central Dispatch API's are very easy to use and read and I really enjoy using them. Although he wants you to use NSOperation, I would say use what you like and what is appropriate to the situation. One reason I really haven't used NSOperation is that when GCD was introduced at WWDC I heard over and over about the GCD API and saw how great it was, and I can't really remember NSOperation or NSBlockOperation being talked about much.
To Chris's credit, he does make good points about NSOperation handling dependencies better, and you can use KVO with NSOperation objects if you need to. Just about all the things you can do with the basic GCD API's you can accomplish with NSOperation(Queue), with the same or a couple more lines of code to get the same effect. There are also several Cocoa API's that are specifically meant to be used with NSOperationQueues, so in those cases you really have no choice but to use NSOperationQueue anyway.
Overall I'd say think about what you'll need to do and why you would need GCD or NSOperation(Queue), and pick appropriately. If you need to, you can always write NSBlockOperation objects and then at some point later on convert those blocks to using the GCD API with a minimal amount of effort.
Further Reading on Grand Central Dispatch/Blocks
Because I only have so much time to write here, and have to split my time between work and multiple projects, I am linking to people I like who have written some great information about Grand Central Dispatch and/or Blocks. This article will most definitely not be my last on Grand Central Dispatch and/or Blocks, though.
http://www.friday.com/bbum/2009/08/29/basic-blocks/
http://www.friday.com/bbum/2009/08/29/blocks-tips-tricks/
http://www.mikeash.com/?page=pyblog/friday-qa-2009-08-28-intro-to-grand-central-dispatch-part-i-basics-and-dispatch-queues.html
http://www.mikeash.com/?page=pyblog/friday-qa-2009-09-04-intro-to-grand-central-dispatch-part-ii-multi-core-performance.html
http://www.mikeash.com/?page=pyblog/friday-qa-2009-09-11-intro-to-grand-central-dispatch-part-iii-dispatch-sources.html
A couple projects making nice use of Blocks
http://github.com/amazingsyco/sscore
Andy Matauschak's KVO with Blocks: http://gist.github.com/153676
Interesting Cocoa API's Making Use of Blocks
A list of some of the Cocoa API's that make use of blocks (thanks to a certain someone for doing this, really appreciate it). I should note that Apple has tried not to use the word block everywhere in its API for a very good reason. When you come to a new API and see something like -[NSArray block], you would probably think it had something to do with blocking on the NSArray, or something where you are blocking execution. Although many API's do have block in their name, it is by no means the only keyword you should use when looking for API's dealing with blocks. For these links to work you must have the documentation installed on your HD.
NSEvent
addGlobalMonitorForEventsMatchingMask:handler:
addLocalMonitorForEventsMatchingMask:handler:
NSSavePanel
beginSheetModalForWindow:completionHandler:
NSWorkspace
duplicateURLs:completionHandler:
recycleURLs:completionHandler:
NSUserInterfaceItemSearching Protocol
searchForItemsWithSearchString:resultLimit:matchedItemHandler:
NSArray
enumerateObjectsAtIndexes:options:usingBlock:
enumerateObjectsUsingBlock:
enumerateObjectsWithOptions:usingBlock:
indexesOfObjectsAtIndexes:options:passingTest:
indexesOfObjectsPassingTest:
indexesOfObjectsWithOptions:passingTest:
indexOfObjectAtIndexes:options:passingTest:
indexOfObjectPassingTest:
indexOfObjectWithOptions:passingTest:
NSAttributedString
enumerateAttribute:inRange:options:usingBlock:
enumerateAttributesInRange:options:usingBlock:
NSBlockOperation
blockOperationWithBlock:
addExecutionBlock:
executionBlocks
NSDictionary
enumerateKeysAndObjectsUsingBlock:
enumerateKeysAndObjectsWithOptions:usingBlock:
keysOfEntriesPassingTest:
keysOfEntriesWithOptions:passingTest:
NSExpression
expressionForBlock:arguments:
expressionBlock
NSFileManager
enumeratorAtURL:includingPropertiesForKeys:options:errorHandler:
NSIndexSet
enumerateIndexesInRange:options:usingBlock:
enumerateIndexesUsingBlock:
enumerateIndexesWithOptions:usingBlock:
indexesInRange:options:passingTest:
indexesPassingTest:
indexesWithOptions:passingTest:
indexInRange:options:passingTest:
indexPassingTest:
indexWithOptions:passingTest:
NSNotificationCenter
addObserverForName:object:queue:usingBlock:
NSOperation
completionBlock
setCompletionBlock:
NSOperationQueue
addOperationWithBlock:
NSPredicate
predicateWithBlock:
NSSet
objectsPassingTest:
objectsWithOptions:passingTest:
NSString
enumerateLinesUsingBlock:
enumerateSubstringsInRange:options:usingBlock:
CATransaction
completionBlock
android – What is the difference between AlertDialog.Builder.setView and Dialog.setContentView?
AlertDialog.Builder.setView: sets a custom view as the content of the Dialog.
Dialog.setContentView: sets the screen content to an explicit view.
But I am still a bit confused; can anyone explain the two in more detail?
Solution
setContentView works just like setting the content view of an Activity: it sets the complete layout. Depending on which setContentView overload you use, it takes either a parent view or a layout inflated from XML.
setContentView(View view)
Set the screen content to an explicit view. This view is placed directly into the screen's view hierarchy. It can itself be a complex view hierarchy.
or
setContentView(int layoutResID)
Set the screen content from a layout resource. The resource will be inflated, adding all top-level views to the screen.
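To make the distinction concrete, here is a small hedged Java sketch, run inside an Activity; the layout resources R.layout.custom_content and R.layout.full_dialog are hypothetical. Builder.setView supplies only the custom content area of an alert, while Dialog.setContentView replaces the dialog window's entire layout.

// setView: the view becomes the alert's content, shown alongside the title and
// buttons that AlertDialog itself manages.
View content = getLayoutInflater().inflate(R.layout.custom_content, null);
AlertDialog alert = new AlertDialog.Builder(this)
        .setTitle("Example")
        .setView(content)
        .setPositiveButton("OK", null)
        .create();
alert.show();

// setContentView: the inflated layout IS the whole dialog content, just like
// Activity.setContentView; no AlertDialog chrome is added around it.
Dialog dialog = new Dialog(this);
dialog.setContentView(R.layout.full_dialog);
dialog.show();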
android – When to call dispose and clear on CompositeDisposable
My question may be a duplicate of How to use CompositeDisposable of RxJava 2?, but I am asking to clarify a doubt.
According to the accepted answer:
// Using clear will clear all, but can accept new disposable
disposables.clear();
// Using dispose will clear all and set isDisposed = true, so it will not accept any new disposable
disposables.dispose();
In my case I am using fragments as my views (the View layer in MVP). In some situations I add the active fragment to the back stack, which does not actually kill the Fragment but only destroys its view. That means only onDestroyView is called, not onDestroy. Later I can come back to the same fragment from the back stack, and only its view is recreated.
I have a CompositeDisposable as a member of my BaseFragment, and it holds the subscriptions.
My question is: should I call clear on the CompositeDisposable in onDestroyView each time? Can it accept subscriptions again once the view is restored? And should dispose be called in onDestroy, so that when the fragment itself is destroyed the disposables are no longer needed?
If that is wrong, what is the correct way to handle this? When must clear and dispose be called?
Solution:
You are right. You can save yourself from creating a new CompositeDisposable every time the corresponding view is created; instead, treat the CompositeDisposable as a single instance bound to the onCreate/onDestroy lifecycle methods, and call clear in onDestroyView to drop the aggregated disposables that belong to the fragment's view.
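A minimal sketch of that lifecycle pairing (assuming RxJava 2 and an AndroidX Fragment; the class name BaseFragment matches the question, everything else is illustrative):

import androidx.fragment.app.Fragment;
import io.reactivex.disposables.CompositeDisposable;

public abstract class BaseFragment extends Fragment {

    // One instance for the whole fragment lifetime (onCreate .. onDestroy).
    protected final CompositeDisposable disposables = new CompositeDisposable();

    @Override
    public void onDestroyView() {
        // The view is going away (e.g. the fragment was pushed onto the back stack):
        // drop the view-bound subscriptions, but keep accepting new ones when the view returns.
        disposables.clear();
        super.onDestroyView();
    }

    @Override
    public void onDestroy() {
        // The fragment itself is being destroyed: nothing will be added after this point.
        disposables.dispose();
        super.onDestroy();
    }
}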