./libstd/rt/kill.rs
1 // Copyright 2013 The Rust Project Developers. See the COPYRIGHT
2 // file at the top-level directory of this distribution and at
3 // http://rust-lang.org/COPYRIGHT.
4 //
5 // Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
6 // http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
7 // <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
8 // option. This file may not be copied, modified, or distributed
9 // except according to those terms.
10
11 /*!
12
13 Task death: asynchronous killing, linked failure, exit code propagation.
14
15 This file implements two orthogonal building-blocks for communicating failure
16 between tasks. One is 'linked failure' or 'task killing', that is, a failing
17 task causing other tasks to fail promptly (even those that are blocked on
18 pipes or I/O). The other is 'exit code propagation', which affects the result
19 observed by the parent of a task::try task that itself spawns child tasks
20 (such as any #[test] function). In both cases the data structures live in
21 KillHandle.
22
23
24 I. Task killing.
25
26 The model for killing involves two atomic flags, the "kill flag" and the
27 "unkillable flag". Operations on the kill flag include:
28
29 - In the taskgroup code (task/spawn.rs), tasks store a clone of their
30 KillHandle in their shared taskgroup. Another task in the group that fails
31 will use that handle to call kill().
32 - When a task blocks, it turns its ~Task into a BlockedTask by storing
33 the transmuted ~Task pointer inside the KillHandle's kill flag. A task
34 trying to block and a task trying to kill it can simultaneously access the
35 kill flag, after which the task will get scheduled and fail (no matter who
36 wins the race). Likewise, a task trying to wake a blocked task normally and
37 a task trying to kill it can simultaneously access the flag; only one will
38 get the task to reschedule it.
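
Schematically, the blocking, waking, and killing races all reduce to single
atomic accesses to the kill flag. The following is only a sketch; the real code
is in BlockedTask::try_block, BlockedTask::wake and KillHandle::kill below, and
'task_ptr' / 'new_state' stand in for the transmuted ~Task pointer and the
value being swapped in:

```
// Blocking task: park the ~Task in the flag, unless a kill already arrived.
match flag.compare_and_swap(KILL_RUNNING, task_ptr, Relaxed) {
    KILL_RUNNING => { /* now blocked; whoever swaps next owns the task */ }
    KILL_KILLED  => { /* a kill was pending; run and fail instead */ }
    _            => { /* can't happen; only one task blocks on a given flag */ }
}

// Normal waker or killer: whoever swaps first gets the task pointer back;
// the loser sees KILL_RUNNING or KILL_KILLED and backs off.
match flag.swap(new_state, Relaxed) { // KILL_RUNNING to wake, KILL_KILLED to kill
    KILL_RUNNING | KILL_KILLED => { /* someone else got there first */ }
    task_ptr => { /* we own the blocked task; reschedule it */ }
}
```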
39
40 Operations on the unkillable flag include:
41
42 - When a task becomes unkillable, it swaps on the flag to forbid any killer
43 from waking it up while it's blocked inside the unkillable section. If a
44 kill was already pending, the task fails instead of becoming unkillable.
45 - When a task is done being unkillable, it restores the flag to the normal
46 running state. If a kill was received-but-blocked during the unkillable
47 section, the task fails at this later point.
48 - When a task tries to kill another task, before swapping on the kill flag, it
49 first swaps on the unkillable flag, to see if it's "allowed" to wake up the
50 task. If it isn't, the killed task will receive the signal when it becomes
51 killable again. (Of course, a task trying to wake the task normally (e.g.
52 sending on a channel) does not access the unkillable flag at all.)
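
Schematically again (a sketch only; the real code is KillHandle::inhibit_kill,
KillHandle::allow_kill and KillHandle::kill below):

```
// Entering an unkillable section:
match unkillable.compare_and_swap(KILL_RUNNING, KILL_UNKILLABLE, Relaxed) {
    KILL_RUNNING => { /* now unkillable; kill signals are deferred */ }
    KILL_KILLED  => { /* a kill was already pending; fail now */ }
    _            => { /* already unkillable: forbidden (nesting is counted
                         separately, in Death's 'unkillable' counter) */ }
}

// Leaving an unkillable section:
match unkillable.compare_and_swap(KILL_UNKILLABLE, KILL_RUNNING, Relaxed) {
    KILL_UNKILLABLE => { /* back to the normal, killable running state */ }
    KILL_KILLED     => { /* a kill arrived while we were unkillable; fail now */ }
    _               => { /* not unkillable: caller error */ }
}

// A killer, before it touches the kill flag at all:
if unkillable.swap(KILL_KILLED, Relaxed) == KILL_RUNNING {
    /* allowed to try waking the task via the kill flag (previous sketch) */
}
```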
53
54 Why do we not need acquire/release barriers on any of the kill flag swaps?
55 This is because barriers establish orderings between accesses on different
56 memory locations, but each kill-related operation is only a swap on a single
57 location, so atomicity is all that matters. The exception is kill(), which
58 does a swap on both flags in sequence. kill() needs no barriers because it
59 does not matter if its two accesses are seen reordered on another CPU: if a
60 killer does perform both writes, it means it saw a KILL_RUNNING in the
61 unkillable flag, which means an unkillable task will see KILL_KILLED and fail
62 immediately (rendering the subsequent write to the kill flag unnecessary).
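
Concretely, the argument looks like this (a sketch; the corresponding code is
KillHandle::kill and KillHandle::inhibit_kill below):

```
// Killer, in kill():
unkillable.swap(KILL_KILLED, Relaxed)   // returned KILL_RUNNING, so proceed
killed.swap(KILL_KILLED, Relaxed)       // may be observed late by other CPUs

// Victim, later trying to enter an unkillable section:
unkillable.compare_and_swap(KILL_RUNNING, KILL_UNKILLABLE, Relaxed)
// RMW operations on a single location are totally ordered, so this sees
// KILL_KILLED and the task fails right here -- before it ever gets a chance
// to block on the (possibly not-yet-visible) kill flag.
```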
63
64
65 II. Exit code propagation.
66
67 The basic model for exit code propagation, which is used with the "watched"
68 spawn mode (on by default for linked spawns, off for supervised and unlinked
69 spawns), is that a parent will wait for all its watched children to exit
70 before reporting whether it succeeded or failed. A watching parent will only
71 report success if it succeeded and all its children also reported success;
72 otherwise, it will report failure. This is most useful for writing test cases:
73
74 ```
75 #[test]
76 fn test_something_in_another_task() {
77 do spawn {
78 assert!(collatz_conjecture_is_false());
79 }
80 }
81 ```
82
83 Here, as the child task will certainly outlive the parent task, we might miss
84 the failure of the child when deciding whether or not the test case passed.
85 The watched spawn mode avoids this problem.
86
87 In order to propagate exit codes from children to their parents, any
88 'watching' parent must wait for all of its children to exit before it can
89 report its final exit status. We achieve this by using an UnsafeArc, using the
90 reference counting to track how many children are still alive, and using the
91 unwrap() operation in the parent's exit path to wait for all children to exit.
92 The UnsafeArc referred to here is actually the KillHandle itself.
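
As a sketch of the arrangement (ignoring the extra clones held by the taskgroup
code for killing; the real fields are Death::kill_handle and
Death::watching_parent below):

```
// The parent's KillHandle is an UnsafeArc, so its refcount is
// 1 (the parent itself) + the number of live watched children.
parent:  kill_handle     = Some(parent_handle)
child A: watching_parent = Some(parent_handle.clone())   // refcount -> 2
child B: watching_parent = Some(parent_handle.clone())   // refcount -> 3
// Each child drops its clone as it exits; the parent's exit path then does
parent_handle.unwrap()   // blocks until the refcount falls back to 1
```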
93
94 This also works transitively, as if a "middle" watched child task is itself
95 watching a grandchild task, the "middle" task will do unwrap() on its own
96 KillHandle (thereby waiting for the grandchild to exit) before dropping its
97 reference to its watching parent (which will alert the parent).
98
99 While UnsafeArc::unwrap() accomplishes the synchronization, there remains the
100 matter of reporting the exit codes themselves. This is easiest when an exiting
101 watched task has no watched children of its own:
102
103 - If the task with no watched children exits successfully, it need do nothing.
104 - If the task with no watched children has failed, it sets a flag in the
105 parent's KillHandle ("any_child_failed") to true. It then stays true forever.
106
107 However, if a "middle" watched task with watched children of its own exits
108 before its children exit, we need to ensure that the grandparent task can still
109 see a failure from the grandchild task. While we could achieve this by having
110 each intermediate task block on its handle, this keeps around the other resources
111 the task was using. To be more efficient, this is accomplished via "tombstones".
112
113 A tombstone is a closure, ~fn() -> bool, which performs any waiting necessary
114 to collect the exit code of descendant tasks. Its environment captures the
115 KillHandle of whichever task created the tombstone, possibly any tombstones
116 that that task itself had, and possibly the parent's previously-installed
117 tombstone, effectively creating a lazy list of heap closures.
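
The resulting shape is roughly the following (a sketch; the real construction is
in reparent_children_to and add_lazy_tombstone below, and the two helper names
here are stand-ins for the map_default(true, |f| f()) calls used there):

```
// Installed in the parent, replacing whatever tombstone was there before:
parent.child_tombstones = Some(|| {
    // Check the tombstones that were installed first ("more fair"),
    previous_tombstones_all_true() && {
        // then wait for the exiting child's own handle to become unwrappable,
        let inner = child_handle.unwrap();
        // and finally fold in that child's accumulated state.
        !inner.any_child_failed && run_if_present(inner.child_tombstones)
    }
});
```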
118
119 When a child wishes to exit early and leave tombstones behind for its parent,
120 it must use a LittleLock (pthread mutex) to synchronize with any possible
121 sibling tasks which are trying to do the same thing with the same parent.
122 However, on the other side, when the parent is ready to pull on the tombstones,
123 it need not use this lock, because the unwrap() serves as a barrier that ensures
124 no children will remain with references to the handle.
125
126 The main logic for creating and assigning tombstones can be found in the
127 function reparent_children_to() in the impl for KillHandle.
128
129
130 IIA. Issues with exit code propagation.
131
132 There are two known issues with the current scheme for exit code propagation.
133
134 - As documented in issue #8136, the structure admits the possibility of stack
135 overflow when collecting tombstones that are very deeply nested. This cannot
136 be avoided with the closure representation, as tombstones end up structured in
137 a sort of tree. Notably, though, the tombstones do not actually need to be
138 collected in any particular order, so a doubly-linked list could be used
139 instead. We do not do this yet because DList is in libextra.
140
141 - A discussion with Graydon made me realize that if we decoupled the exit code
142 propagation from the parents-waiting action, this could result in a simpler
143 implementation as the exit codes themselves would not have to be propagated,
144 and could instead be propagated implicitly through the taskgroup mechanism
145 that we already have. The tombstoning scheme would still be required. I have
146 not implemented this because we currently can't receive a linked failure kill
147 signal during the task cleanup activity, as that is "unkillable" and occurs
148 outside the task's unwinder's "try" block, so it would require some
149 restructuring.
150
151 */
152
153 use cast;
154 use cell::Cell;
155 use either::{Either, Left, Right};
156 use option::{Option, Some, None};
157 use prelude::*;
158 use rt::task::Task;
159 use task::spawn::Taskgroup;
160 use to_bytes::IterBytes;
161 use unstable::atomics::{AtomicUint, Relaxed};
162 use unstable::sync::{UnsafeArc, LittleLock};
163 use util;
164
165 static KILLED_MSG: &'static str = "killed by linked failure";
166
167 // State values for the 'killed' and 'unkillable' atomic flags below.
168 static KILL_RUNNING: uint = 0;
169 static KILL_KILLED: uint = 1;
170 static KILL_UNKILLABLE: uint = 2;
171
172 struct KillFlag(AtomicUint);
173 type KillFlagHandle = UnsafeArc<KillFlag>;
174
175 /// A handle to a blocked task. Usually this means having the ~Task pointer by
176 /// ownership, but if the task is killable, a killer can steal it at any time.
177 pub enum BlockedTask {
178 Unkillable(~Task),
179 Killable(KillFlagHandle),
180 }
181
182 // FIXME(#7544)(bblum): think about the cache efficiency of this
183 struct KillHandleInner {
184 // Is the task running, blocked, or killed? Possible values:
185 // * KILL_RUNNING - Not unkillable, no kill pending.
186 // * KILL_KILLED - Kill pending.
187 // * <ptr> - A transmuted blocked ~Task pointer.
188 // This flag is refcounted because it may also be referenced by a blocking
189 // concurrency primitive, used to wake the task normally, whose reference
190 // may outlive the handle's if the task is killed.
191 killed: KillFlagHandle,
192 // Has the task deferred kill signals? This flag guards the above one.
193 // Possible values:
194 // * KILL_RUNNING - Not unkillable, no kill pending.
195 // * KILL_KILLED - Kill pending.
196 // * KILL_UNKILLABLE - Kill signals deferred.
197 unkillable: AtomicUint,
198
199 // Shared state between task and children for exit code propagation. These
200 // are here so we can re-use the kill handle to implement watched children
201 // tasks. Using a separate Arc-like would introduce extra atomic adds/subs
202 // into common spawn paths, so this is just for speed.
203
204 // Locklessly accessed; protected by the enclosing refcount's barriers.
205 any_child_failed: bool,
206 // A lazy list, consuming which may unwrap() many child tombstones.
207 child_tombstones: Option<~fn() -> bool>,
208 // Protects multiple children simultaneously creating tombstones.
209 graveyard_lock: LittleLock,
210 }
211
212 /// State shared between tasks used for task killing during linked failure.
213 #[deriving(Clone)]
214 pub struct KillHandle(UnsafeArc<KillHandleInner>);
215
216 /// Per-task state related to task death, killing, failure, etc.
217 pub struct Death {
218 // Shared among this task, its watched children, and any linked tasks who
219 // might kill it. This is optional so we can take it by-value at exit time.
220 kill_handle: Option<KillHandle>,
221 // Handle to a watching parent, if we have one, for exit code propagation.
222 watching_parent: Option<KillHandle>,
223 // Action to be done with the exit code. If set, also makes the task wait
224 // until all its watched children exit before collecting the status.
225 on_exit: Option<~fn(bool)>,
226 // nesting level counter for task::unkillable calls (0 == killable).
227 unkillable: int,
228 // nesting level counter for unstable::atomically calls (0 == can deschedule).
229 wont_sleep: int,
230 // A "spare" handle to the kill flag inside the kill handle. Used during
231 // blocking/waking as an optimization to avoid two xadds on the refcount.
232 spare_kill_flag: Option<KillFlagHandle>,
233 }
234
235 impl Drop for KillFlag {
236 // Letting a KillFlag with a task inside get dropped would leak the task.
237 // We could free it here, but the task should get awoken by hand somehow.
238 fn drop(&mut self) {
239 match self.load(Relaxed) {
240 KILL_RUNNING | KILL_KILLED => { },
241 _ => rtabort!("can't drop kill flag with a blocked task inside!"),
242 }
243 }
244 }
245
246 // Whenever a task blocks, it swaps out its spare kill flag to use as the
247 // blocked task handle. So unblocking a task must restore that spare.
248 unsafe fn revive_task_ptr(task_ptr: uint, spare_flag: Option<KillFlagHandle>) -> ~Task {
249 let mut task: ~Task = cast::transmute(task_ptr);
250 if task.death.spare_kill_flag.is_none() {
251 task.death.spare_kill_flag = spare_flag;
252 } else {
253 // A task's spare kill flag is not used for blocking in one case:
254 // when an unkillable task blocks on select. In this case, a separate
255 // one was created, which we now discard.
256 rtassert!(task.death.unkillable > 0);
257 }
258 task
259 }
260
261 impl BlockedTask {
262 /// Returns Some if the task was successfully woken; None if already killed.
263 pub fn wake(self) -> Option<~Task> {
264 match self {
265 Unkillable(task) => Some(task),
266 Killable(flag_arc) => {
267 let flag = unsafe { &mut **flag_arc.get() };
268 match flag.swap(KILL_RUNNING, Relaxed) {
269 KILL_RUNNING => None, // woken from select(), perhaps
270 KILL_KILLED => None, // a killer stole it already
271 task_ptr =>
272 Some(unsafe { revive_task_ptr(task_ptr, Some(flag_arc)) })
273 }
274 }
275 }
276 }
277
278 /// Create a blocked task, unless the task was already killed.
279 pub fn try_block(mut task: ~Task) -> Either<~Task, BlockedTask> {
280 // NB: As an optimization, we could give a free pass to being unkillable
281 // to tasks whose taskgroups haven't been initialized yet, but that
282 // introduces complications with select() and with the test cases below,
283 // and it's not clear the uncommon performance boost is worth it.
284 if task.death.unkillable > 0 {
285 Right(Unkillable(task))
286 } else {
287 rtassert!(task.death.kill_handle.is_some());
288 unsafe {
289 // The inverse of 'revive', above, occurs here.
290 // The spare kill flag will usually be Some, unless the task was
291 // already killed, in which case the killer will have deferred
292 // creating a new one until whenever it blocks during unwinding.
293 let flag_arc = match task.death.spare_kill_flag.take() {
294 Some(spare_flag) => spare_flag,
295 None => {
296 // A task that kills us won't have a spare kill flag to
297 // give back to us, so we restore it ourselves here. This
298 // situation should only arise when we're already failing.
299 rtassert!(task.unwinder.unwinding);
300 (*task.death.kill_handle.get_ref().get()).killed.clone()
301 }
302 };
303 let flag = &mut **flag_arc.get();
304 let task_ptr = cast::transmute(task);
305 // Expect flag to contain RUNNING. If KILLED, it should stay KILLED.
306 match flag.compare_and_swap(KILL_RUNNING, task_ptr, Relaxed) {
307 KILL_RUNNING => Right(Killable(flag_arc)),
308 KILL_KILLED => Left(revive_task_ptr(task_ptr, Some(flag_arc))),
309 x => rtabort!("can't block task! kill flag = {}", x),
310 }
311 }
312 }
313 }
314
315 /// Converts one blocked task handle to a list of many handles to the same.
316 pub fn make_selectable(self, num_handles: uint) -> ~[BlockedTask] {
317 let handles = match self {
318 Unkillable(task) => {
319 let flag = unsafe { KillFlag(AtomicUint::new(cast::transmute(task))) };
320 UnsafeArc::newN(flag, num_handles)
321 }
322 Killable(flag_arc) => flag_arc.cloneN(num_handles),
323 };
324 // Even if the task was unkillable before, we use 'Killable' because
325 // multiple pipes will have handles. It does not really mean killable.
326 handles.move_iter().map(|x| Killable(x)).collect()
327 }
328
329 // This assertion has two flavours because the wake involves an atomic op.
330 // In the faster version, destructors will fail dramatically instead.
331 #[inline] #[cfg(not(test))]
332 pub fn assert_already_awake(self) { }
333 #[inline] #[cfg(test)]
334 pub fn assert_already_awake(self) { assert!(self.wake().is_none()); }
335
336 /// Convert to an unsafe uint value. Useful for storing in a pipe's state flag.
337 #[inline]
338 pub unsafe fn cast_to_uint(self) -> uint {
339 // Use the low bit to distinguish the enum variants, to save a second
340 // allocation in the indestructible case.
341 match self {
342 Unkillable(task) => {
343 let blocked_task_ptr: uint = cast::transmute(task);
344 rtassert!(blocked_task_ptr & 0x1 == 0);
345 blocked_task_ptr
346 },
347 Killable(flag_arc) => {
348 let blocked_task_ptr: uint = cast::transmute(~flag_arc);
349 rtassert!(blocked_task_ptr & 0x1 == 0);
350 blocked_task_ptr | 0x1
351 }
352 }
353 }
354
355 /// Convert from an unsafe uint value. Useful for retrieving a pipe's state flag.
356 #[inline]
357 pub unsafe fn cast_from_uint(blocked_task_ptr: uint) -> BlockedTask {
358 if blocked_task_ptr & 0x1 == 0 {
359 Unkillable(cast::transmute(blocked_task_ptr))
360 } else {
361 let ptr: ~KillFlagHandle = cast::transmute(blocked_task_ptr & !0x1);
362 match ptr {
363 ~flag_arc => Killable(flag_arc)
364 }
365 }
366 }
367 }
368
369 // So that KillHandle can be hashed in the taskgroup bookkeeping code.
370 impl IterBytes for KillHandle {
371 fn iter_bytes(&self, lsb0: bool, f: &fn(buf: &[u8]) -> bool) -> bool {
372 self.data.iter_bytes(lsb0, f)
373 }
374 }
375 impl Eq for KillHandle {
376 #[inline] fn eq(&self, other: &KillHandle) -> bool { self.data.eq(&other.data) }
377 #[inline] fn ne(&self, other: &KillHandle) -> bool { self.data.ne(&other.data) }
378 }
379
380 impl KillHandle {
381 pub fn new() -> (KillHandle, KillFlagHandle) {
382 let (flag, flag_clone) =
383 UnsafeArc::new2(KillFlag(AtomicUint::new(KILL_RUNNING)));
384 let handle = KillHandle(UnsafeArc::new(KillHandleInner {
385 // Linked failure fields
386 killed: flag,
387 unkillable: AtomicUint::new(KILL_RUNNING),
388 // Exit code propagation fields
389 any_child_failed: false,
390 child_tombstones: None,
391 graveyard_lock: LittleLock::new(),
392 }));
393 (handle, flag_clone)
394 }
395
396 // Will begin unwinding if a kill signal was received, unless already_failing.
397 // This can't be used recursively, because a task which sees a KILLED
398 // signal must fail immediately, which an already-unkillable task can't do.
399 #[inline]
400 pub fn inhibit_kill(&mut self, already_failing: bool) {
401 let inner = unsafe { &mut *self.get() };
402 // Expect flag to contain RUNNING. If KILLED, it should stay KILLED.
403 // FIXME(#7544)(bblum): is it really necessary to prohibit double kill?
404 match inner.unkillable.compare_and_swap(KILL_RUNNING, KILL_UNKILLABLE, Relaxed) {
405 KILL_RUNNING => { }, // normal case
406 KILL_KILLED => if !already_failing { fail2!("{}", KILLED_MSG) },
407 _ => rtabort!("inhibit_kill: task already unkillable"),
408 }
409 }
410
411 // Will begin unwinding if a kill signal was received, unless already_failing.
412 #[inline]
413 pub fn allow_kill(&mut self, already_failing: bool) {
414 let inner = unsafe { &mut *self.get() };
415 // Expect flag to contain UNKILLABLE. If KILLED, it should stay KILLED.
416 // FIXME(#7544)(bblum): is it really necessary to prohibit double kill?
417 match inner.unkillable.compare_and_swap(KILL_UNKILLABLE, KILL_RUNNING, Relaxed) {
418 KILL_UNKILLABLE => { }, // normal case
419 KILL_KILLED => if !already_failing { fail2!("{}", KILLED_MSG) },
420 _ => rtabort!("allow_kill: task already killable"),
421 }
422 }
423
424 // Send a kill signal to the handle's owning task. Returns the task itself
425 // if it was blocked and needs to be punted awake. To be called by other tasks.
426 pub fn kill(&mut self) -> Option<~Task> {
427 let inner = unsafe { &mut *self.get() };
428 if inner.unkillable.swap(KILL_KILLED, Relaxed) == KILL_RUNNING {
429 // Got in. Allowed to try to punt the task awake.
430 let flag = unsafe { &mut *inner.killed.get() };
431 match flag.swap(KILL_KILLED, Relaxed) {
432 // Task either not blocked or already taken care of.
433 KILL_RUNNING | KILL_KILLED => None,
434 // Got ownership of the blocked task.
435 // While the usual 'wake' path can just pass back the flag
436 // handle, we (the slower kill path) haven't an extra one lying
437 // around. The task will wake up without a spare.
438 task_ptr => Some(unsafe { revive_task_ptr(task_ptr, None) }),
439 }
440 } else {
441 // Otherwise it was either unkillable or already killed. Somebody
442 // else was here first who will deal with the kill signal.
443 None
444 }
445 }
446
447 #[inline]
448 pub fn killed(&self) -> bool {
449 // Called every context switch, so shouldn't report true if the task
450 // is unkillable with a kill signal pending.
451 let inner = unsafe { &*self.get() };
452 let flag = unsafe { &*inner.killed.get() };
453 // A barrier-related concern here is that a task that gets killed
454 // awake needs to see the killer's write of KILLED to this flag. This
455 // is analogous to receiving a pipe payload; the appropriate barrier
456 // should happen when enqueueing the task.
457 flag.load(Relaxed) == KILL_KILLED
458 }
459
460 pub fn notify_immediate_failure(&mut self) {
461 // A benign data race may happen here if there are failing sibling
462 // tasks that were also spawned-watched. The refcount's write barriers
463 // in UnsafeArc ensure that this write will be seen by the
464 // unwrapper/destructor, whichever task may unwrap it.
465 unsafe { (*self.get()).any_child_failed = true; }
466 }
467
468 // For use when a task does not need to collect its children's exit
469 // statuses, but the task has a parent which might want them.
470 pub fn reparent_children_to(self, parent: &mut KillHandle) {
471 // Optimistic path: If another child of the parent's already failed,
472 // we don't need to worry about any of this.
473 if unsafe { (*parent.get()).any_child_failed } {
474 return;
475 }
476
477 // Try to see if all our children are gone already.
478 match self.try_unwrap() {
479 // Couldn't unwrap; children still alive. Reparent entire handle as
480 // our own tombstone, to be unwrapped later.
481 Left(this) => {
482 let this = Cell::new(this); // :(
483 do add_lazy_tombstone(parent) |other_tombstones| {
484 let this = Cell::new(this.take()); // :(
485 let others = Cell::new(other_tombstones); // :(
486 || {
487 // Prefer to check tombstones that were there first,
488 // being "more fair" at the expense of tail-recursion.
489 others.take().map_default(true, |f| f()) && {
490 let mut inner = this.take().unwrap();
491 (!inner.any_child_failed) &&
492 inner.child_tombstones.take().map_default(true, |f| f())
493 }
494 }
495 }
496 }
497 // Whether or not all children exited, one or more already failed.
498 Right(KillHandleInner { any_child_failed: true, _ }) => {
499 parent.notify_immediate_failure();
500 }
501 // All children exited, but some left behind tombstones that we
502 // don't want to wait on now. Give them to our parent.
503 Right(KillHandleInner { any_child_failed: false,
504 child_tombstones: Some(f), _ }) => {
505 let f = Cell::new(f); // :(
506 do add_lazy_tombstone(parent) |other_tombstones| {
507 let f = Cell::new(f.take()); // :(
508 let others = Cell::new(other_tombstones); // :(
509 || {
510 // Prefer fairness to tail-recursion, as in above case.
511 others.take().map_default(true, |f| f()) &&
512 f.take()()
513 }
514 }
515 }
516 // All children exited, none failed. Nothing to do!
517 Right(KillHandleInner { any_child_failed: false,
518 child_tombstones: None, _ }) => { }
519 }
520
521 // NB: Takes a pthread mutex -- 'blk' not allowed to reschedule.
522 #[inline]
523 fn add_lazy_tombstone(parent: &mut KillHandle,
524 blk: &fn(Option<~fn() -> bool>) -> ~fn() -> bool) {
525
526 let inner: &mut KillHandleInner = unsafe { &mut *parent.get() };
527 unsafe {
528 do inner.graveyard_lock.lock {
529 // Update the current "head node" of the lazy list.
530 inner.child_tombstones =
531 Some(blk(util::replace(&mut inner.child_tombstones, None)));
532 }
533 }
534 }
535 }
536 }
537
538 impl Death {
539 pub fn new() -> Death {
540 let (handle, spare) = KillHandle::new();
541 Death {
542 kill_handle: Some(handle),
543 watching_parent: None,
544 on_exit: None,
545 unkillable: 0,
546 wont_sleep: 0,
547 spare_kill_flag: Some(spare),
548 }
549 }
550
551 pub fn new_child(&self) -> Death {
552 // FIXME(#7327)
553 let (handle, spare) = KillHandle::new();
554 Death {
555 kill_handle: Some(handle),
556 watching_parent: self.kill_handle.clone(),
557 on_exit: None,
558 unkillable: 0,
559 wont_sleep: 0,
560 spare_kill_flag: Some(spare),
561 }
562 }
563
564 /// Collect failure exit codes from children and propagate them to a parent.
565 pub fn collect_failure(&mut self, mut success: bool, group: Option<Taskgroup>) {
566 // This may run after the task has already failed, so even though the
567 // task appears to need to be killed, the scheduler should not fail us
568 // when we block to unwrap.
569 // (XXX: Another less-elegant reason for doing this is so that the use
570 // of the LittleLock in reparent_children_to doesn't need to access the
571 // unkillable flag in the kill_handle, since we'll have removed it.)
572 rtassert!(self.unkillable == 0);
573 self.unkillable = 1;
574
575 // NB. See corresponding comment at the callsite in task.rs.
576 // FIXME(#8192): Doesn't work with "let _ = ..."
577 { use util; util::ignore(group); }
578
579 // Step 1. Decide if we need to collect child failures synchronously.
580 do self.on_exit.take().map |on_exit| {
581 if success {
582 // We succeeded, but our children might not. Need to wait for them.
583 let mut inner = self.kill_handle.take_unwrap().unwrap();
584 if inner.any_child_failed {
585 success = false;
586 } else {
587 // Lockless access to tombstones protected by unwrap barrier.
588 success = inner.child_tombstones.take().map_default(true, |f| f());
589 }
590 }
591 on_exit(success);
592 };
593
594 // Step 2. Possibly alert possibly-watching parent to failure status.
595 // Note that as soon as parent_handle goes out of scope, the parent
596 // can successfully unwrap its handle and collect our reported status.
597 do self.watching_parent.take().map |mut parent_handle| {
598 if success {
599 // Our handle might be None if we had an exit callback, and
600 // already unwrapped it. But 'success' being true means no
601 // child failed, so there's nothing to do (see below case).
602 do self.kill_handle.take().map |own_handle| {
603 own_handle.reparent_children_to(&mut parent_handle);
604 };
605 } else {
606 // Can inform watching parent immediately that we failed.
607 // (Note the importance of non-failing tasks NOT writing
608 // 'false', which could obscure another task's failure.)
609 parent_handle.notify_immediate_failure();
610 }
611 };
612
613 // Can't use allow_kill directly; that would require the kill handle.
614 rtassert!(self.unkillable == 1);
615 self.unkillable = 0;
616 }
617
618 /// Fails if a kill signal was received.
619 #[inline]
620 pub fn check_killed(&self, already_failing: bool) {
621 match self.kill_handle {
622 Some(ref kill_handle) =>
623 // The task may be both unkillable and killed if it does some
624 // synchronization during unwinding or cleanup (for example,
625 // sending on a notify port). In that case failing won't help.
626 if self.unkillable == 0 && (!already_failing) && kill_handle.killed() {
627 fail2!("{}", KILLED_MSG);
628 },
629 // This may happen during task death (see comments in collect_failure).
630 None => rtassert!(self.unkillable > 0),
631 }
632 }
633
634 /// Enter a possibly-nested unkillable section of code.
635 /// All calls must be paired with a subsequent call to allow_kill.
636 #[inline]
637 pub fn inhibit_kill(&mut self, already_failing: bool) {
638 self.unkillable += 1;
639 // May fail, hence must happen *after* incrementing the counter
640 if self.unkillable == 1 {
641 rtassert!(self.kill_handle.is_some());
642 self.kill_handle.get_mut_ref().inhibit_kill(already_failing);
643 }
644 }
645
646 /// Exit a possibly-nested unkillable section of code.
647 /// All calls must be paired with a preceding call to inhibit_kill.
648 #[inline]
649 pub fn allow_kill(&mut self, already_failing: bool) {
650 if self.unkillable == 0 {
651 // we need to decrement the counter before failing.
652 self.unkillable -= 1;
653 fail2!("Cannot enter a rekillable() block without a surrounding unkillable()");
654 }
655 self.unkillable -= 1;
656 if self.unkillable == 0 {
657 rtassert!(self.kill_handle.is_some());
658 self.kill_handle.get_mut_ref().allow_kill(already_failing);
659 }
660 }
661
662 /// Enter a possibly-nested "atomic" section of code. Just for assertions.
663 /// All calls must be paired with a subsequent call to allow_deschedule.
664 #[inline]
665 pub fn inhibit_deschedule(&mut self) {
666 self.wont_sleep += 1;
667 }
668
669 /// Exit a possibly-nested "atomic" section of code. Just for assertions.
670 /// All calls must be paired with a preceding call to inhibit_deschedule.
671 #[inline]
672 pub fn allow_deschedule(&mut self) {
673 rtassert!(self.wont_sleep != 0);
674 self.wont_sleep -= 1;
675 }
676
677 /// Ensure that the task is allowed to become descheduled.
678 #[inline]
679 pub fn assert_may_sleep(&self) {
680 if self.wont_sleep != 0 {
681 rtabort!("illegal atomic-sleep: attempt to reschedule while \
682 using an Exclusive or LittleLock");
683 }
684 }
685 }
686
687 impl Drop for Death {
688 fn drop(&mut self) {
689 // Mustn't be in an atomic or unkillable section at task death.
690 rtassert!(self.unkillable == 0);
691 rtassert!(self.wont_sleep == 0);
692 }
693 }
694
695 #[cfg(test)]
696 mod test {
697 #[allow(unused_mut)];
698 use cell::Cell;
699 use rt::test::*;
700 use super::*;
701 use util;
702
703 // Test cases don't care about the spare killed flag.
704 fn make_kill_handle() -> KillHandle { let (h,_) = KillHandle::new(); h }
705
706 #[ignore(reason = "linked failure")]
707 #[test]
708 fn no_tombstone_success() {
709 do run_in_newsched_task {
710 // Tests case 4 of the 4-way match in reparent_children.
711 let mut parent = make_kill_handle();
712 let mut child = make_kill_handle();
713
714 // Without another handle to child, the try unwrap should succeed.
715 child.reparent_children_to(&mut parent);
716 let mut parent_inner = parent.unwrap();
717 assert!(parent_inner.child_tombstones.is_none());
718 assert!(parent_inner.any_child_failed == false);
719 }
720 }
721 #[test]
722 fn no_tombstone_failure() {
723 do run_in_newsched_task {
724 // Tests case 2 of the 4-way match in reparent_children.
725 let mut parent = make_kill_handle();
726 let mut child = make_kill_handle();
727
728 child.notify_immediate_failure();
729 // Without another handle to child, the try unwrap should succeed.
730 child.reparent_children_to(&mut parent);
731 let mut parent_inner = parent.unwrap();
732 assert!(parent_inner.child_tombstones.is_none());
733 // Immediate failure should have been propagated.
734 assert!(parent_inner.any_child_failed);
735 }
736 }
737 #[test]
738 fn no_tombstone_because_sibling_already_failed() {
739 do run_in_newsched_task {
740 // Tests "case 0", the optimistic path in reparent_children.
741 let mut parent = make_kill_handle();
742 let mut child1 = make_kill_handle();
743 let mut child2 = make_kill_handle();
744 let mut link = child2.clone();
745
746 // Should set parent's child_failed flag
747 child1.notify_immediate_failure();
748 child1.reparent_children_to(&mut parent);
749 // Should bypass trying to unwrap child2 entirely.
750 // Otherwise, due to 'link', it would try to tombstone.
751 child2.reparent_children_to(&mut parent);
752 // Should successfully unwrap even though 'link' is still alive.
753 let mut parent_inner = parent.unwrap();
754 assert!(parent_inner.child_tombstones.is_none());
755 // Immediate failure should have been propagated by first child.
756 assert!(parent_inner.any_child_failed);
757 util::ignore(link);
758 }
759 }
760 #[test]
761 fn one_tombstone_success() {
762 do run_in_newsched_task {
763 let mut parent = make_kill_handle();
764 let mut child = make_kill_handle();
765 let mut link = child.clone();
766
767 // Creates 1 tombstone. Existence of 'link' makes try-unwrap fail.
768 child.reparent_children_to(&mut parent);
769 // Let parent collect tombstones.
770 util::ignore(link);
771 // Must have created a tombstone
772 let mut parent_inner = parent.unwrap();
773 assert!(parent_inner.child_tombstones.take_unwrap()());
774 assert!(parent_inner.any_child_failed == false);
775 }
776 }
777 #[test]
778 fn one_tombstone_failure() {
779 do run_in_newsched_task {
780 let mut parent = make_kill_handle();
781 let mut child = make_kill_handle();
782 let mut link = child.clone();
783
784 // Creates 1 tombstone. Existence of 'link' makes try-unwrap fail.
785 child.reparent_children_to(&mut parent);
786 // Must happen after tombstone to not be immediately propagated.
787 link.notify_immediate_failure();
788 // Let parent collect tombstones.
789 util::ignore(link);
790 // Must have created a tombstone
791 let mut parent_inner = parent.unwrap();
792 // Failure must be seen in the tombstone.
793 assert!(parent_inner.child_tombstones.take_unwrap()() == false);
794 assert!(parent_inner.any_child_failed == false);
795 }
796 }
797 #[test]
798 fn two_tombstones_success() {
799 do run_in_newsched_task {
800 let mut parent = make_kill_handle();
801 let mut middle = make_kill_handle();
802 let mut child = make_kill_handle();
803 let mut link = child.clone();
804
805 child.reparent_children_to(&mut middle); // case 1 tombstone
806 // 'middle' should try-unwrap okay, but still have to reparent.
807 middle.reparent_children_to(&mut parent); // case 3 tombstone
808 // Let parent collect tombstones.
809 util::ignore(link);
810 // Must have created a tombstone
811 let mut parent_inner = parent.unwrap();
812 assert!(parent_inner.child_tombstones.take_unwrap()());
813 assert!(parent_inner.any_child_failed == false);
814 }
815 }
816 #[test]
817 fn two_tombstones_failure() {
818 do run_in_newsched_task {
819 let mut parent = make_kill_handle();
820 let mut middle = make_kill_handle();
821 let mut child = make_kill_handle();
822 let mut link = child.clone();
823
824 child.reparent_children_to(&mut middle); // case 1 tombstone
825 // Must happen after tombstone to not be immediately propagated.
826 link.notify_immediate_failure();
827 // 'middle' should try-unwrap okay, but still have to reparent.
828 middle.reparent_children_to(&mut parent); // case 3 tombstone
829 // Let parent collect tombstones.
830 util::ignore(link);
831 // Must have created a tombstone
832 let mut parent_inner = parent.unwrap();
833 // Failure must be seen in the tombstone.
834 assert!(parent_inner.child_tombstones.take_unwrap()() == false);
835 assert!(parent_inner.any_child_failed == false);
836 }
837 }
838
839 // Task killing tests
840
841 #[test]
842 fn kill_basic() {
843 do run_in_newsched_task {
844 let mut handle = make_kill_handle();
845 assert!(!handle.killed());
846 assert!(handle.kill().is_none());
847 assert!(handle.killed());
848 }
849 }
850
851 #[test]
852 fn double_kill() {
853 do run_in_newsched_task {
854 let mut handle = make_kill_handle();
855 assert!(!handle.killed());
856 assert!(handle.kill().is_none());
857 assert!(handle.killed());
858 assert!(handle.kill().is_none());
859 assert!(handle.killed());
860 }
861 }
862
863 #[test]
864 fn unkillable_after_kill() {
865 do run_in_newsched_task {
866 let mut handle = make_kill_handle();
867 assert!(handle.kill().is_none());
868 assert!(handle.killed());
869 let handle_cell = Cell::new(handle);
870 let result = do spawntask_try {
871 handle_cell.take().inhibit_kill(false);
872 };
873 assert!(result.is_err());
874 }
875 }
876
877 #[test]
878 fn unkillable_during_kill() {
879 do run_in_newsched_task {
880 let mut handle = make_kill_handle();
881 handle.inhibit_kill(false);
882 assert!(handle.kill().is_none());
883 assert!(!handle.killed());
884 let handle_cell = Cell::new(handle);
885 let result = do spawntask_try {
886 handle_cell.take().allow_kill(false);
887 };
888 assert!(result.is_err());
889 }
890 }
891
892 #[test]
893 fn unkillable_before_kill() {
894 do run_in_newsched_task {
895 let mut handle = make_kill_handle();
896 handle.inhibit_kill(false);
897 handle.allow_kill(false);
898 assert!(handle.kill().is_none());
899 assert!(handle.killed());
900 }
901 }
902
903 // Task blocking tests
904
905 #[test]
906 fn block_and_wake() {
907 do with_test_task |mut task| {
908 BlockedTask::try_block(task).unwrap_right().wake().unwrap()
909 }
910 }
911
912 #[ignore(reason = "linked failure")]
913 #[test]
914 fn block_and_get_killed() {
915 do with_test_task |mut task| {
916 let mut handle = task.death.kill_handle.get_ref().clone();
917 let result = BlockedTask::try_block(task).unwrap_right();
918 let task = handle.kill().unwrap();
919 assert!(result.wake().is_none());
920 task
921 }
922 }
923
924 #[ignore(reason = "linked failure")]
925 #[test]
926 fn block_already_killed() {
927 do with_test_task |mut task| {
928 let mut handle = task.death.kill_handle.get_ref().clone();
929 assert!(handle.kill().is_none());
930 BlockedTask::try_block(task).unwrap_left()
931 }
932 }
933
934 #[ignore(reason = "linked failure")]
935 #[test]
936 fn block_unkillably_and_get_killed() {
937 do with_test_task |mut task| {
938 let mut handle = task.death.kill_handle.get_ref().clone();
939 task.death.inhibit_kill(false);
940 let result = BlockedTask::try_block(task).unwrap_right();
941 assert!(handle.kill().is_none());
942 let mut task = result.wake().unwrap();
943 // This call wants to fail, but we can't have that happen since
944 // we're not running in a newsched task, so we can't even use
945 // spawntask_try. But the failing behaviour is already tested
946 // above, in unkillable_during_kill(), so we punt on it here.
947 task.death.allow_kill(true);
948 task
949 }
950 }
951
952 #[ignore(reason = "linked failure")]
953 #[test]
954 fn block_on_pipe() {
955 // Tests the "killable" path of casting to/from uint.
956 do run_in_newsched_task {
957 do with_test_task |mut task| {
958 let result = BlockedTask::try_block(task).unwrap_right();
959 let result = unsafe { result.cast_to_uint() };
960 let result = unsafe { BlockedTask::cast_from_uint(result) };
961 result.wake().unwrap()
962 }
963 }
964 }
965
966 #[ignore(reason = "linked failure")]
967 #[test]
968 fn block_unkillably_on_pipe() {
969 // Tests the "indestructible" path of casting to/from uint.
970 do run_in_newsched_task {
971 do with_test_task |mut task| {
972 task.death.inhibit_kill(false);
973 let result = BlockedTask::try_block(task).unwrap_right();
974 let result = unsafe { result.cast_to_uint() };
975 let result = unsafe { BlockedTask::cast_from_uint(result) };
976 let mut task = result.wake().unwrap();
977 task.death.allow_kill(false);
978 task
979 }
980 }
981 }
982 }
libstd/rt/kill.rs:176:79-176:79 -enum- definition:
/// ownership, but if the task is killable, a killer can steal it at any time.
pub enum BlockedTask {
references:
261: impl BlockedTask {
357: pub unsafe fn cast_from_uint(blocked_task_ptr: uint) -> BlockedTask {
279: pub fn try_block(mut task: ~Task) -> Either<~Task, BlockedTask> {
316: pub fn make_selectable(self, num_handles: uint) -> ~[BlockedTask] {
libstd/rt/sched.rs:
681: pub fn deschedule_running_task_and_then(~self, f: &fn(&mut Scheduler, BlockedTask)) {
662: pub fn resume_blocked_task_immediately(~self, blocked_task: BlockedTask) {
540: pub fn enqueue_blocked_task(&mut self, blocked_task: BlockedTask) {
691: f: &fn(&mut Scheduler, BlockedTask)) {
libstd/rt/uv/uvio.rs:
1545: priv descheduled: Option<BlockedTask>,
libstd/rt/tube.rs:
26: blocked_task: Option<BlockedTask>,
libstd/rt/comm.rs:
533: fn block_on(&mut self, sched: &mut Scheduler, task: BlockedTask) -> bool {
553: fn block_on(&mut self, sched: &mut Scheduler, task: BlockedTask) -> bool {
245: fn block_on(&mut self, sched: &mut Scheduler, task: BlockedTask) -> bool {
libstd/rt/select.rs:
21: fn block_on(&mut self, &mut Scheduler, BlockedTask) -> bool;
libstd/rt/kill.rs:216:65-216:65 -struct- definition:
/// Per-task state related to task death, killing, failure, etc.
pub struct Death {
references:
539: pub fn new() -> Death {
541: Death {
687: impl Drop for Death {
551: pub fn new_child(&self) -> Death {
554: Death {
538: impl Death {
libstd/rt/task.rs:
51: death: Death,
libstd/rt/kill.rs:172:29-172:29 -ty- definition:
struct KillFlag(AtomicUint);
type KillFlagHandle = UnsafeArc<KillFlag>;
references:
248: unsafe fn revive_task_ptr(task_ptr: uint, spare_flag: Option<KillFlagHandle>) -> ~Task {
232: spare_kill_flag: Option<KillFlagHandle>,
361: let ptr: ~KillFlagHandle = cast::transmute(blocked_task_ptr & !0x1);
191: killed: KillFlagHandle,
381: pub fn new() -> (KillHandle, KillFlagHandle) {
179: Killable(KillFlagHandle),
libstd/rt/kill.rs:213:19-213:19 -struct- definition:
#[deriving(Clone)]
pub struct KillHandle(UnsafeArc<KillHandleInner>);
references:
372: self.data.iter_bytes(lsb0, f)
381: pub fn new() -> (KillHandle, KillFlagHandle) {
377: #[inline] fn ne(&self, other: &KillHandle) -> bool { self.data.ne(&other.data) }
376: #[inline] fn eq(&self, other: &KillHandle) -> bool { self.data.eq(&other.data) }
377: #[inline] fn ne(&self, other: &KillHandle) -> bool { self.data.ne(&other.data) }
213: #[deriving(Clone)]
222: watching_parent: Option<KillHandle>,
370: impl IterBytes for KillHandle {
375: impl Eq for KillHandle {
377: #[inline] fn ne(&self, other: &KillHandle) -> bool { self.data.ne(&other.data) }
380: impl KillHandle {
220: kill_handle: Option<KillHandle>,
213: #[deriving(Clone)]
523: fn add_lazy_tombstone(parent: &mut KillHandle,
376: #[inline] fn eq(&self, other: &KillHandle) -> bool { self.data.eq(&other.data) }
470: pub fn reparent_children_to(self, parent: &mut KillHandle) {
376: #[inline] fn eq(&self, other: &KillHandle) -> bool { self.data.eq(&other.data) }
libstd/task/spawn.rs:
385: fn enlist_in_taskgroup(state: TaskGroupInner, me: KillHandle,
530: fn enlist_many(child: &KillHandle, child_arc: &TaskGroupArc,
413: fn kill_taskgroup(state: Option<TaskGroupData>, me: &KillHandle) {
400: fn leave_taskgroup(state: TaskGroupInner, me: &KillHandle, is_member: bool) {
107: fn insert(&mut self, task: KillHandle) {
99: struct TaskSet(HashSet<KillHandle>);
112: fn remove(&mut self, task: &KillHandle) {
450: fn with_task_handle_and_failing(blk: &fn(&KillHandle, bool)) {
441: fn kill_task(mut handle: KillHandle) {
117: fn move_iter(self) -> HashSetMoveIterator<KillHandle> {
libstd/rt/kill.rs:182:65-182:65 -struct- definition:
// FIXME(#7544)(bblum): think about the cache efficiency of this
struct KillHandleInner {
references:
384: let handle = KillHandle(UnsafeArc::new(KillHandleInner {
517: Right(KillHandleInner { any_child_failed: false,
526: let inner: &mut KillHandleInner = unsafe { &mut *parent.get() };
503: Right(KillHandleInner { any_child_failed: false,
498: Right(KillHandleInner { any_child_failed: true, _ }) => {
214: pub struct KillHandle(UnsafeArc<KillHandleInner>);
libstd/rt/kill.rs:171:1-171:1 -struct- definition:
struct KillFlag(AtomicUint);
references:
235: impl Drop for KillFlag {
173: type KillFlagHandle = UnsafeArc<KillFlag>;
libstd/rt/kill.rs:247:70-247:70 -fn- definition:
// blocked task handle. So unblocking a task must restore that spare.
unsafe fn revive_task_ptr(task_ptr: uint, spare_flag: Option<KillFlagHandle>) -> ~Task {
references:
308: KILL_KILLED => Left(revive_task_ptr(task_ptr, Some(flag_arc))),
438: task_ptr => Some(unsafe { revive_task_ptr(task_ptr, None) }),
272: Some(unsafe { revive_task_ptr(task_ptr, Some(flag_arc)) })
libstd/rt/kill.rs:523:8-523:8 -fn- definition:
fn add_lazy_tombstone(parent: &mut KillHandle,
blk: &fn(Option<~fn() -> bool>) -> ~fn() -> bool) {
references:
483: do add_lazy_tombstone(parent) |other_tombstones| {
506: do add_lazy_tombstone(parent) |other_tombstones| {