Python queue get


18.5.8. Queues

class asyncio.Queue(maxsize=0, *, loop=None)¶

A queue, useful for coordinating producer and consumer coroutines.

If maxsize is less than or equal to zero, the queue size is infinite. If it is an integer greater than 0, then put() will block when the queue reaches maxsize, until an item is removed by get().

Unlike the standard library queue, you can reliably know this Queue’s size with qsize(), since your single-threaded asyncio application won’t be interrupted between calling qsize() and doing an operation on the Queue.

This class is not thread safe.

empty()¶

Return True if the queue is empty, False otherwise.

full()¶

Return True if there are maxsize items in the queue.

Note

If the Queue was initialized with maxsize=0 (the default), then full() is never True.

coroutine get()¶

Remove and return an item from the queue. If queue is empty, wait until an item is available.

This method is a coroutine.

get_nowait()¶

Remove and return an item from the queue.

Return an item if one is immediately available, else raise QueueEmpty.

coroutine join()¶

Block until all items in the queue have been gotten and processed.

The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer calls task_done() to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks.

This method is a coroutine.

coroutine put(item)¶

Put an item into the queue. If the queue is full, wait until a free slot is available before adding item.

This method is a coroutine.

put_nowait(item)¶

Put an item into the queue without blocking.

If no free slot is immediately available, raise QueueFull.

qsize()¶

Number of items in the queue.

task_done()¶

Indicate that a formerly enqueued task is complete.

Used by queue consumers. For each get() used to fetch a task, a subsequent call to task_done() tells the queue that the processing on the task is complete.

If a join() is currently blocking, it will resume when all items have been processed (meaning that a task_done() call was received for every item that had been put() into the queue).

Raises ValueError if called more times than there were items placed in the queue.

maxsize

Number of items allowed in the queue.
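
A minimal producer/consumer sketch of the API above, written with modern async/await syntax and the pre-3.7 event-loop calls to match the 3.5 docs quoted here (the item values and queue size are arbitrary):

import asyncio

async def producer(q):
    for i in range(5):
        await q.put(i)              # waits if the queue is at maxsize
        print('produced', i)

async def consumer(q):
    while True:
        item = await q.get()        # waits until an item is available
        print('consumed', item)
        q.task_done()               # one task_done() per get()

async def main():
    q = asyncio.Queue(maxsize=2)
    worker = asyncio.ensure_future(consumer(q))
    await producer(q)
    await q.join()                  # unblocks once every item is task_done()
    worker.cancel()

asyncio.get_event_loop().run_until_complete(main())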

Source: https://docs.python.org/3.5/library/asyncio-queue.html

Python Queue.get() Examples

The following are code examples showing how to use Queue.get(), extracted from open source projects.

Example 1

def make_web(queue):
    app = Flask(__name__)

    @app.route('/')
    def index():
        return render_template('index.html')

    def gen():
        while True:
            frame = queue.get()
            _, frame = cv2.imencode('.JPEG', frame)
            yield (b'--frame\r\n'
                   b'Content-Type: image/jpeg\r\n\r\n' + frame.tostring() + b'\r\n')

    @app.route('/video_feed')
    def video_feed():
        return Response(gen(), mimetype='multipart/x-mixed-replace; boundary=frame')

    try:
        app.run(host='0.0.0.0', port=8889)
    except:
        print('unable to open port')

Example 3

def get(self, poll_interval=5):
    while True:
        try:
            # Using Queue.get() with a timeout is really expensive - Python uses
            # busy waiting that wakes up the process every 50ms - so we switch
            # to a more efficient polling method if there is no activity for
            # <fast_poll_time> seconds.
            if time.time() - self.last_item_time < self.fast_poll_time:
                message = Queue.Queue.get(self, block=True, timeout=poll_interval)
            else:
                time.sleep(poll_interval)
                message = Queue.Queue.get(self, block=False)
            break
        except Queue.Empty:
            self.callback()

    self.last_item_time = time.time()
    return message

Example 4

def worker(queue, user, size, outdir, total):
    while True:
        try:
            photo = queue.get(False)
        except Queue.Empty:
            break
        media_url = photo[1]
        urllib3_download(media_url, size, outdir)
        with lock:
            global downloaded
            downloaded += 1
            d = {
                'media_url': os.path.basename(media_url),
                'user': user,
                'index': downloaded + 1 if downloaded < total else total,
                'total': total,
            }
            progress = PROGRESS_FORMATTER % d
            sys.stdout.write('\r%s' % progress)
            sys.stdout.flush()

Example 5

def _worker_manager_loop(in_queue, out_queue, done_event, pin_memory, device_id): if pin_memory: torch.cuda.set_device(device_id) while True: try: r = in_queue.get() except Exception: if done_event.is_set(): return raise if r is None: break if isinstance(r[1], ExceptionWrapper): out_queue.put(r) continue idx, batch = r try: if pin_memory: batch = pin_memory_batch(batch) except Exception: out_queue.put((idx, ExceptionWrapper(sys.exc_info()))) else: out_queue.put((idx, batch))

Example 6

def _set_SIGCHLD_handler(): # Windows doesn't support SIGCHLD handler if sys.platform == 'win32': return # can't set signal in child threads if not isinstance(threading.current_thread(), threading._MainThread): return global _SIGCHLD_handler_set if _SIGCHLD_handler_set: return previous_handler = signal.getsignal(signal.SIGCHLD) if not callable(previous_handler): previous_handler = None def handler(signum, frame): # This following call uses `waitid` with WNOHANG from C side. Therefore, # Python can still get and update the process status successfully. _error_if_any_worker_fails() if previous_handler is not None: previous_handler(signum, frame) signal.signal(signal.SIGCHLD, handler) _SIGCHLD_handler_set = True

Example 8

def act(self, action): if self.nthreads > 1: new = self.pool.map(env_step, zip(self.env, action)) else: new = [env.step(act) for env, act in zip(self.env, action)] reward = np.asarray([i[1] for i in new], dtype=np.float32) done = np.asarray([i[2] for i in new], dtype=np.float32) channels = self.state_.shape[1]//self.input_length state = np.zeros_like(self.state_) state[:,:-channels,:,:] = self.state_[:,channels:,:,:] for i, (ob, env) in enumerate(zip(new, self.env)): if ob[2]: state[i,-channels:,:,:] = env.reset().transpose((2,0,1)) else: state[i,-channels:,:,:] = ob[0].transpose((2,0,1)) self.state_ = state if self.web_viz: try: while self.queue.qsize() > 10: self.queue.get(False) except queue.Empty: pass frame = self.visual() self.queue.put(frame) return reward, done

Example 9

def get_msg(self, block=True, timeout=None):
    """ Gets a message if there is one that is ready. """
    if timeout is None:
        # Queue.get(timeout=None) has stupid uninteruptible
        # behavior, so wait for a week instead
        timeout = 604800
    return self._in_queue.get(block, timeout)

Example 11

def download_worker():
    while True:
        url = queue.get()
        download_file(url, SAVE_DIR)
        queue.task_done()

# Returns the path of the specified page number

Example 12

def worker(sess,model_options,model_vars,Queue,CLASS_DICT): while True: # print 'Queue Size', Queue.qsize() try: fname = Queue.get() except: return start = time.time() file_name_orig = fname.split(' ')[0].split('/')[1].strip() file_name = file_name_orig.replace('.avi','.npz') class_name = fname.split(' ')[0].split('/')[0].strip().lower() class_idx = CLASS_DICT[class_name] try: frames = np.load(model_options['data_dir']+file_name)['arr_0'] except: print "Couldn't Open: ",model_options['data_dir']+file_name Queue.task_done() continue idx = 0 if model_options['mode'] == 'train': idx = random.randint(0,frames.shape[0]-1) frames = frames[idx] tmpImg,tmpLab,num_crops = getCrops(sess,model_options,model_vars,frames,np.array((class_idx))) if model_options['mode'] == 'train': for j in range(num_crops): size = model_options['example_size'] sess.run(model_vars['enqueue_op'],feed_dict={model_vars['images']:tmpImg[j*size:(j+1)*size], model_vars['labels']:tmpLab[j:(j+1)]}) else: sess.run(model_vars['enqueue_op'],feed_dict={model_vars['images']:tmpImg, model_vars['labels']:tmpLab, model_vars['names']:[[file_name_orig]]*num_crops}) Queue.task_done()

Example 13

def get(self, **kwargs):
    """Get an item from the queue.

    Kwargs are ignored (often used in standard library queue.get calls)"""
    msg = self.queue.get(acknowledge=False)
    if msg is None:
        raise Empty
    return pickle.loads(msg.body)

Example 14

def multiprocess_configuration(n_cpus, pax_id, base_config_kwargs, processing_queue_kwargs, output_queue_kwargs): """Yields configuration override dicts for multiprocessing""" # Config overrides for child processes common_override = dict(pax=dict(autorun=True, show_progress_bar=False), DEFAULT=dict(pax_id=pax_id)) input_override = dict(pax=dict(plugin_group_names=['input', 'output'], encoder_plugin=None, decoder_plugin=None, output='Queues.PushToQueue'), Queues=dict(**processing_queue_kwargs)) worker_override = {'pax': dict(input='Queues.PullFromQueue', output='Queues.PushToQueue', event_numbers_file=None, events_to_process=None), # PullFromQueue can't have a timeout in the workers, see #444 'Queues.PullFromQueue': dict(timeout_after_sec=float('inf'), **processing_queue_kwargs), 'Queues.PushToQueue': dict(preserve_ids=True, many_to_one=True, **output_queue_kwargs)} output_override = dict(pax=dict(plugin_group_names=['input', 'output'], encoder_plugin=None, decoder_plugin=None, event_numbers_file=None, events_to_process=None, input='Queues.PullFromQueue'), Queues=dict(ordered_pull=True, **output_queue_kwargs)) overrides = [('input', input_override)] + [('worker', worker_override)] * n_cpus + [('output', output_override)] for worker_type, worker_overide in overrides: new_conf = deepcopy(base_config_kwargs) new_conf['config_dict'] = combine_configs(new_conf.get('config_dict'), common_override, worker_overide) yield worker_type, new_conf

Example 15

def check_local_processes_while_remote_processing(running_paxes, crash_fanout, terminate_host_on_crash=False): """Check on locally running paxes in running_paxes, returns list of remaining running pax processes. - Remove any paxes that have exited normally - If a pax has crashed, push a message to the crash fanout to terminate all paxes with the same id - Look for crash fanout messages from other processes, and terminate local paxes with the same id - terminate_host_on_crash: if True, raise exception in the host process if a pax crash is detected in a pax chain we're participating in. Do NOT use in a host process that can host multiple pax chains! We will not check the presence of other pax chains and terminate them too! """ p_by_status = group_by_status(running_paxes) running_paxes = p_by_status['running'] # If any of our own paxes crashed, send a message to the crash fanout # This will inform everyone connected to the server (including ourselves, on the next iteration) for crashed_w in p_by_status['crashed']: pax_id = crashed_w.pax_id exctype, traceb = get_exception_from_process(p_by_status['crashed'][0]) print("Pax %s crashed!\nDumping exception traceback:\n\n%s\n\nNotifying crash fanout." % ( pax_id, format_exception_dump(traceb) )) crash_fanout.put((pax_id, exctype, traceb)) running_paxes, _ = terminate_paxes_with_id(running_paxes, pax_id) if terminate_host_on_crash: raise exctype("Pax %s crashed! Traceback:\n %s" % (pax_id, format_exception_dump(traceb))) # If any of the remote paxes crashed, we will learn about it from the crash fanout. try: pax_id, exctype, traceb = crash_fanout.get() print("Remote crash notification for pax %s.\n" "Remote exception traceback dump:\n\n%s\n.Terminating paxes with id %s." % ( pax_id, format_exception_dump(traceb), pax_id)) running_paxes, n_terminated = terminate_paxes_with_id(running_paxes, pax_id) if n_terminated > 0 and terminate_host_on_crash: raise exctype("Pax %s crashed! Traceback:\n %s" % (pax_id, format_exception_dump(traceb))) except Empty: pass return running_paxes

Example 16

def get_exception_from_process(p):
    crdict = p.shared_dict
    try:
        exc_type = eval(crdict.get('exception_type', 'UnknownPropagatedException'),
                        exceptions.__dict__)
    except NameError:
        exc_type = exceptions.UnknownPropagatedException
    traceb = crdict.get('traceback', 'No traceback reported')
    return exc_type, traceb

Example 17

def run(self):
    while self.alive.isSet():
        try:
            # Queue.get with timeout to allow checking self.alive
            cmd = self.cmd_q.get(True, 0.1)
            self.handlers[cmd.type](cmd)
        except Queue.Empty as e:
            continue

Example 18

def _reduction_thread_fn(queue, group_id, device_ids, reduction_streams, nccl_streams): def _process_batch(): dev_grad_batch, dev_events, job_event = queue.get() dev_coalesced = [] # Coalesce the tensors on all devices and start a local reduction for dev_id, grad_batch, event, stream in zip(device_ids, dev_grad_batch, dev_events, reduction_streams): with torch.cuda.device(dev_id), torch.cuda.stream(stream): stream.wait_event(event) coalesced = _flatten_tensors(grad_batch) dev_coalesced.append(coalesced) # Wait for all copies to complete before starting the NCCL kernel for stream in reduction_streams: stream.synchronize() nccl.reduce(dev_coalesced, root=0, streams=nccl_streams) # From now on we're only going to work on the first device (from device_ids) grad_batch = dev_grad_batch[0] coalesced = dev_coalesced[0] reduce_stream = reduction_streams[0] with torch.cuda.stream(reduce_stream): reduce_stream.wait_stream(nccl_streams[0]) coalesced /= dist.get_world_size() dist.all_reduce(coalesced, group=group_id) for grad, reduced in zip(grad_batch, _unflatten_tensors(coalesced, grad_batch)): grad.copy_(reduced) job_event.set() with torch.cuda.device(device_ids[0]): while True: _process_batch() # just to have a clear scope

Example 19

def Pop(self, key):
    """Remove the object from the cache completely."""
    node = self._hash.get(key)

    if node:
        self._age.Unlink(node)

        return node.data

Example 20

def __setstate__(self, state):
    self.__init__(max_size=state.get("max_size", 10))

Example 21

def _worker_loop(dataset, index_queue, data_queue, collate_fn, init_fn, worker_id): global _use_shared_memory _use_shared_memory = True # Intialize C side signal handlers for SIGBUS and SIGSEGV. Python signal # module's handlers are executed after Python returns from C low-level # handlers, likely when the same fatal signal happened again already. # https://docs.python.org/3/library/signal.html Sec. 18.8.1.1 _set_worker_signal_handlers() torch.set_num_threads(1) if init_fn is not None: init_fn(worker_id) watchdog = ManagerWatchdog() while True: try: r = index_queue.get(timeout=MANAGER_STATUS_CHECK_INTERVAL) except queue.Empty: if watchdog.is_alive(): continue else: break if r is None: break idx, batch_indices = r try: samples = collate_fn([dataset[i] for i in batch_indices]) except Exception: data_queue.put((idx, ExceptionWrapper(sys.exc_info()))) else: data_queue.put((idx, samples)) del samples

Example 22

def _get_batch(self):
    if self.timeout > 0:
        try:
            return self.data_queue.get(timeout=self.timeout)
        except queue.Empty:
            raise RuntimeError('DataLoader timed out after {} seconds'.format(self.timeout))
    else:
        return self.data_queue.get()

Example 23

def _shutdown_workers(self): try: if not self.shutdown: self.shutdown = True self.done_event.set() for q in self.index_queues: q.put(None) # if some workers are waiting to put, make place for them try: while not self.worker_result_queue.empty(): self.worker_result_queue.get() except (FileNotFoundError, ImportError): # Many weird errors can happen here due to Python # shutting down. These are more like obscure Python bugs. # FileNotFoundError can happen when we rebuild the fd # fetched from the queue but the socket is already closed # from the worker side. # ImportError can happen when the unpickler loads the # resource from `get`. pass # done_event should be sufficient to exit worker_manager_thread, # but be safe here and put another None self.worker_result_queue.put(None) finally: # removes pids no matter what if self.worker_pids_set: _remove_worker_pids(id(self)) self.worker_pids_set = False

Example 25

def get_weight(self, which='last', include_baseline=False): """ Gets start and stop weights. TODO: add ability to get weights by session number, dates, and ranges. Args: which (str): if 'last', gets most recent weights. Otherwise returns all weights. include_baseline (bool): if True, includes baseline and minimum mass. Returns: dict """ # get either the last start/stop weights, optionally including baseline # TODO: Get by session weights = {} h5f = self.open_hdf() weight_table = h5f.root.history.weights if which == 'last': for column in weight_table.colnames: try: weights[column] = weight_table.read(-1, field=column)[0] except IndexError: weights[column] = None else: for column in weight_table.colnames: try: weights[column] = weight_table.read(field=column) except IndexError: weights[column] = None if include_baseline is True: try: baseline = float(h5f.root.info._v_attrs['baseline_mass']) except KeyError: baseline = 0.0 minimum = baseline*0.8 weights['baseline_mass'] = baseline weights['minimum_mass'] = minimum self.close_hdf(h5f) return weights
Source: https://www.programcreek.com/python/example/4219/Queue.get

17.7. queue — A synchronized queue class¶

Source code: Lib/queue.py


The queue module implements multi-producer, multi-consumer queues. It is especially useful in threaded programming when information must be exchanged safely between multiple threads. The Queue class in this module implements all the required locking semantics. It depends on the availability of thread support in Python; see the threading module.

The module implements three types of queue, which differ only in the order in which the entries are retrieved. In a FIFO queue, the first tasks added are the first retrieved. In a LIFO queue, the most recently added entry is the first retrieved (operating like a stack). With a priority queue, the entries are kept sorted (using the heapq module) and the lowest valued entry is retrieved first.

Internally, the module uses locks to temporarily block competing threads; however, it is not designed to handle reentrancy within a thread.

The module defines the following classes and exceptions:

class queue.Queue(maxsize=0)¶

Constructor for a FIFO queue. maxsize is an integer that sets the upperbound limit on the number of items that can be placed in the queue. Insertion will block once this size has been reached, until queue items are consumed. If maxsize is less than or equal to zero, the queue size is infinite.

class queue.LifoQueue(maxsize=0)¶

Constructor for a LIFO queue. maxsize is an integer that sets the upperbound limit on the number of items that can be placed in the queue. Insertion will block once this size has been reached, until queue items are consumed. If maxsize is less than or equal to zero, the queue size is infinite.

class queue.PriorityQueue(maxsize=0)¶

Constructor for a priority queue. maxsize is an integer that sets the upperbound limit on the number of items that can be placed in the queue. Insertion will block once this size has been reached, until queue items are consumed. If maxsize is less than or equal to zero, the queue size is infinite.

The lowest valued entries are retrieved first (the lowest valued entry is the one returned by sorted(list(entries))[0]). A typical pattern for entries is a tuple in the form: (priority_number, data).
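
A short sketch of the (priority_number, data) pattern (the job names are arbitrary):

import queue

pq = queue.PriorityQueue()
pq.put((2, 'medium-priority job'))
pq.put((1, 'high-priority job'))
pq.put((3, 'low-priority job'))

while not pq.empty():
    priority, data = pq.get()
    print(priority, data)   # retrieved in ascending priority order: 1, 2, 3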

exception queue.Empty

Exception raised when non-blocking get() (or get_nowait()) is called on a Queue object which is empty.

exception queue.Full

Exception raised when non-blocking put() (or put_nowait()) is called on a Queue object which is full.

17.7.1. Queue Objects¶

Queue objects (Queue, LifoQueue, or PriorityQueue) provide the public methods described below.

Queue.qsize()¶

Return the approximate size of the queue. Note, qsize() > 0 doesn’t guarantee that a subsequent get() will not block, nor will qsize() < maxsize guarantee that put() will not block.

Queue.empty()¶

Return True if the queue is empty, False otherwise. If empty() returns True it doesn’t guarantee that a subsequent call to put() will not block. Similarly, if empty() returns False it doesn’t guarantee that a subsequent call to get() will not block.

Queue.full()¶

Return True if the queue is full, False otherwise. If full() returns True it doesn’t guarantee that a subsequent call to get() will not block. Similarly, if full() returns False it doesn’t guarantee that a subsequent call to put() will not block.

Queue.put(item, block=True, timeout=None)¶

Put item into the queue. If optional args block is true and timeout is None (the default), block if necessary until a free slot is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Full exception if no free slot was available within that time. Otherwise (block is false), put an item on the queue if a free slot is immediately available, else raise the Full exception (timeout is ignored in that case).

Queue.put_nowait(item)¶

Equivalent to put(item, False).

Queue.get(block=True, timeout=None)¶

Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. Otherwise (block is false), return an item if one is immediately available, else raise the Empty exception (timeout is ignored in that case).

Queue.get_nowait()¶

Equivalent to get(False).
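
A minimal sketch of the blocking, timed, and non-blocking forms of get(); the one-second timeout is an arbitrary value chosen for illustration:

import queue

q = queue.Queue()
q.put('job')

print(q.get())              # blocks until an item is available

try:
    q.get(timeout=1.0)      # waits at most one second
except queue.Empty:
    print('no item arrived within the timeout')

try:
    q.get_nowait()          # same as q.get(False): never blocks
except queue.Empty:
    print('queue was empty')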

Two methods are offered to support tracking whether enqueued tasks have been fully processed by daemon consumer threads.

Queue.task_done()¶

Indicate that a formerly enqueued task is complete. Used by queue consumer threads. For each get() used to fetch a task, a subsequent call to task_done() tells the queue that the processing on the task is complete.

If a join() is currently blocking, it will resume when all items have been processed (meaning that a task_done() call was received for every item that had been put() into the queue).

Raises a ValueError if called more times than there were items placed in the queue.

Queue.join()¶

Blocks until all items in the queue have been gotten and processed.

The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls task_done() to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks.

Example of how to wait for enqueued tasks to be completed:

def worker():
    while True:
        item = q.get()
        if item is None:
            break
        do_work(item)
        q.task_done()

q = queue.Queue()
threads = []
for i in range(num_worker_threads):
    t = threading.Thread(target=worker)
    t.start()
    threads.append(t)

for item in source():
    q.put(item)

# block until all tasks are done
q.join()

# stop workers
for i in range(num_worker_threads):
    q.put(None)
for t in threads:
    t.join()
Source: https://python.readthedocs.io/en/latest/library/queue.html

What is Python Queue?

A queue is a container that holds data. The data that is entered first will be removed first, and hence a queue is also called “First in First Out” (FIFO). The queue has two ends front and rear. The items are entered from the rear and removed from the front side.


How does Python Queue Work?

A queue can easily be compared with a real-world example: the line of people waiting at a ticket counter. The person standing first gets the ticket first, followed by the next person, and so on. The same logic applies to the queue data structure.

Here is a diagrammatic representation of queue:

The Rear represents the point where the items are inserted inside the queue. In this example, 7 is value for that.

The Front represents the point where the items from the queue will be removed. If you remove an item from the queue, the first element you will get is 1, as shown in the figure.

Item 1 was the first one to be inserted in the queue, and when removing items it is the first one to come out. Hence the queue is called FIRST IN FIRST OUT (FIFO).

In a queue, the items are removed in order and cannot be removed from the middle. You cannot remove item 5 at random from the queue; to do that, you would have to remove all the items before 5. The items in the queue are removed in the order they are inserted.

Types of Queue in Python

There are mainly two types of queue in Python:

  • First in First out Queue: For this, the element that goes first will be the first to come out.

    To work with FIFO, you have to call Queue() class from queue module.

  • Last in First out Queue: Over here, the element that is entered last will be the first to come out.

    To work with LIFO, you have to call LifoQueue() class from the queue module.

Python queue Installation

It is very easy to work with queue in python. Here are the steps to follow to make use of queue in your code.

Step 1) You just have to import the queue module, as shown below:

import queue

The module is available by default with python, and you don’t need any additional installation to start working with the queue. There are 2 types of queue FIFO (first in first out) and LIFO (last in first out).

Step 2) To work with FIFO queue , call the Queue class using the queue module imported as shown below:

import queue q1 = queue.Queue()

Step 3) To work with LIFO queue call the LifoQueue() class as shown below:

import queue q1 = queue.LifoQueue()

Methods available inside Queue and LifoQueue class

Following are the important methods available inside the Queue and LifoQueue classes; a short example combining them follows the list:

  • put(item): This will put the item inside the queue.
  • get(): This will return you an item from the queue.
  • empty(): It will return true if the queue is empty and false if items are present.
  • qsize(): returns the size of the queue.
  • full(): returns true if the queue is full, otherwise false.
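
A minimal sketch combining these methods on a FIFO queue (the values and the maxsize of 3 are arbitrary):

import queue

q1 = queue.Queue(maxsize=3)
q1.put('a')          # add an item
q1.put('b')

print(q1.get())      # 'a' - the oldest item comes out first
print(q1.qsize())    # 1
print(q1.empty())    # False
print(q1.full())     # False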

First In First Out Queue Example

In the case of first in first out, the element that goes first will be the first to come out.

Add an item in a queue

Let us work on an example to add an item in a queue. To start working with the queue, first import the queue module, as shown in the example below.

To add an item, you can make use of the put() method as shown in the example:

import queue

q1 = queue.Queue()
q1.put(10)  # this will add item 10 to the queue

By default, the size of the queue is infinite and you can add any number of items to it. In case you want to define the size of the queue the same can be done as follows

import queue

q1 = queue.Queue(5)  # the max size is 5
q1.put(1)
q1.put(2)
q1.put(3)
q1.put(4)
q1.put(5)
print(q1.full())  # will return True

Output:

True

Now the maximum size of the queue is 5, it will not take more than 5 items, and the method q1.full() will return True. If you try to add another item with put(), the call blocks until a consumer removes an item from the queue.
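
A minimal sketch of what happens once the queue is full, continuing the five-item example above: put() with a timeout raises queue.Full instead of blocking forever, and put_nowait() raises it immediately.

import queue

q1 = queue.Queue(5)
for i in range(5):
    q1.put(i)

try:
    q1.put(6, timeout=1)    # waits at most one second for a free slot
except queue.Full:
    print('queue is full, item not added')

try:
    q1.put_nowait(7)        # never waits
except queue.Full:
    print('queue is still full')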

Remove an item from the queue

To remove an item from the queue, you can use the method called get(). This method removes and returns an item from the queue when called.

The following example shows how to remove an item from the queue.

import queue

q1 = queue.Queue()
q1.put(10)
item1 = q1.get()
print('The item removed from the queue is ', item1)

Output:

The item removed from the queue is 10

Last In First Out queue Example

In the case of a last in, first out queue, the element that is entered last will be the first to come out.

To work with LIFO, i.e., a last in, first out queue, we need to import the queue module and make use of the LifoQueue() class.

Add an item in a queue

Here we will understand how to add an item to the LIFO queue.

import queue

q1 = queue.LifoQueue()
q1.put(10)

You have to use the put() method on LifoQueue, as shown in the above example.

Remove an item from the queue

To remove an item from the LIFO queue, you can make use of the get() method.

import queue

q1 = queue.LifoQueue()
q1.put(10)
item1 = q1.get()
print('The item removed from the LIFO queue is ', item1)

Output:

The item removed from the LIFO queue is 10

Add more than 1 item in a Queue

In the above examples, we have seen how to add a single item and remove it for FIFO and LIFO queues. Now we will see how to add more than one item and also remove them.

Add items in a FIFO queue

import queue

q1 = queue.Queue()
for i in range(20):
    q1.put(i)  # this will add items 0 to 19 to the queue

Remove items from the FIFO queue

import queue

q1 = queue.Queue()
for i in range(20):
    q1.put(i)  # this will add items 0 to 19 to the queue

while not q1.empty():
    print("The value is ", q1.get())  # get() will remove the item from the queue

Output:

The value is 0 The value is 1 The value is 2 The value is 3 The value is 4 The value is 5 The value is 6 The value is 7 The value is 8 The value is 9 The value is 10 The value is 11 The value is 12 The value is 13 The value is 14 The value is 15 The value is 16 The value is 17 The value is 18 The value is 19

Add items in a LIFO queue

import queue

q1 = queue.LifoQueue()
for i in range(20):
    q1.put(i)  # this will add items 0 to 19 to the queue

Remove items from the LIFO queue

import queue

q1 = queue.LifoQueue()
for i in range(20):
    q1.put(i)  # this will add items 0 to 19 to the queue

while not q1.empty():
    print("The value is ", q1.get())  # get() will remove the item from the queue

Output:

The value is 19 The value is 18 The value is 17 The value is 16 The value is 15 The value is 14 The value is 13 The value is 12 The value is 11 The value is 10 The value is 9 The value is 8 The value is 7 The value is 6 The value is 5 The value is 4 The value is 3 The value is 2 The value is 1 The value is 0

Sorting Queue

The following example shows queue sorting. The algorithm used for sorting is bubble sort.

import queue

q1 = queue.Queue()

# Adding items to the queue
q1.put(11)
q1.put(5)
q1.put(4)
q1.put(21)
q1.put(3)
q1.put(10)

# Using bubble sort on the queue
n = q1.qsize()
for i in range(n):
    x = q1.get()  # the element is removed
    for j in range(n - 1):
        y = q1.get()  # the element is removed
        if x > y:
            q1.put(y)  # the smaller one is put at the start of the queue
        else:
            q1.put(x)  # the smaller one is put at the start of the queue
            x = y  # the greater one is kept in x and compared again with the next element
    q1.put(x)

while (q1.empty() == False):
    print(q1.queue[0], end=" ")
    q1.get()

Output:

3 4 5 10 11 21

Reversing Queue

To reverse the queue, you can make use of another queue and recursion.

The following example shows how to get the queue reversed.

Example:

import queue

q1 = queue.Queue()
q1.put(11)
q1.put(5)
q1.put(4)
q1.put(21)
q1.put(3)
q1.put(10)

def reverseQueue(q1src, q2dest):
    buffer = q1src.get()
    if (q1src.empty() == False):
        reverseQueue(q1src, q2dest)  # using recursion
    q2dest.put(buffer)
    return q2dest

q2dest = queue.Queue()
qReversed = reverseQueue(q1, q2dest)

while (qReversed.empty() == False):
    print(qReversed.queue[0], end=" ")
    qReversed.get()

Output:

10 3 21 4 5 11

Summary:

  • A queue is a container that holds data. There are two types of Queue, FIFO, and LIFO.
  • For a FIFO (First in First out Queue), the element that goes first will be the first to come out.
  • For a LIFO (Last in First out Queue), the element that is entered last will be the first to come out.
  • An item in a queue is added using the put(item) method.
  • To remove an item, get() method is used.
Source: https://www.guru99.com/python-queue-example.html

Get python queue

The Put() Method Of Queue Class In Python

# Example Python program that uses put() method
# to add elements to a queue.Queue instance
import queue
import threading
import os
import sys
import random
import time

try:
    # Function for writer thread
    def writer(sq):
        while(True):
            data = random.randint(0, 9999)
            time.sleep(1)
            sq.put(data)
            print("writer:added %d" % data)

    # Function for reader thread
    def reader(sq):
        while(True):
            data = sq.get()
            time.sleep(1)
            print("reader:removed %d" % data)

    # Create a synchronized queue instance
    sharedQueue = queue.Queue()

    # Create reader and writer threads
    threads = [None, None]
    threads[0] = threading.Thread(target=reader, args=(sharedQueue,))
    threads[1] = threading.Thread(target=writer, args=(sharedQueue,))

    # Start the reader and writer threads
    for thread in threads:
        thread.start()

    # Wait for the reader and writer threads to exit
    for thread in threads:
        thread.join()

except KeyboardInterrupt:
    print('Keyboard interrupt received from user')
    try:
        sys.exit(0)
    except:
        os._exit(0)
Source: https://pythontic.com/queue-module/queue-class/put

Queue in Python

Like a stack, a queue is a linear data structure that stores items in First In First Out (FIFO) manner. With a queue, the least recently added item is removed first. A good example of a queue is any queue of consumers for a resource where the consumer that came first is served first.
 


Operations associated with queue are: 
 

  • Enqueue: Adds an item to the queue. If the queue is full, then it is said to be an Overflow condition – Time Complexity : O(1)
  • Dequeue: Removes an item from the queue. The items are popped in the same order in which they are pushed. If the queue is empty, then it is said to be an Underflow condition – Time Complexity : O(1)
  • Front: Get the front item from queue – Time Complexity : O(1)
  • Rear: Get the last item from queue – Time Complexity : O(1)

 

Implementation

There are various ways to implement a queue in Python. This article covers the implementation of a queue using data structures and modules from the Python library.
A queue in Python can be implemented in the following ways:
 

  • list
  • collections.deque
  • queue.Queue

 

Implementation using list

List is a Python built-in data structure that can be used as a queue. Instead of enqueue() and dequeue(), the append() and pop() functions are used. However, lists are quite slow for this purpose because inserting or deleting an element at the beginning requires shifting all of the other elements by one, requiring O(n) time.
 

Python3
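
A minimal list-based sketch that produces the output shown below:

# Queue implementation using a plain list
queue = []

# Adding elements to the queue
queue.append('a')
queue.append('b')
queue.append('c')

print("Initial queue")
print(queue)

# Removing elements from the queue
print("\nElements dequeued from queue")
print(queue.pop(0))
print(queue.pop(0))
print(queue.pop(0))

print("\nQueue after removing elements")
print(queue)

# One more print(queue.pop(0)) here would raise the IndexError shown below,
# because the queue is now empty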

Output: 
 

Initial queue
['a', 'b', 'c']

Elements dequeued from queue
a
b
c

Queue after removing elements
[]

Traceback (most recent call last):
  File "/home/ef51acf025182ccd69d906e58f17b6de.py", line 25, in <module>
    print(queue.pop(0))
IndexError: pop from empty list

 

Implementation using collections.deque

A queue in Python can be implemented using the deque class from the collections module. Deque is preferred over list in the cases where we need quicker append and pop operations from both ends of the container, as deque provides O(1) time complexity for append and pop operations, compared to list, which provides O(n) time complexity. Instead of enqueue() and dequeue(), the append() and popleft() functions are used.
 

Python3
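
A minimal deque-based sketch that produces the output shown below:

# Queue implementation using collections.deque
from collections import deque

q = deque()

# Adding elements to the queue
q.append('a')
q.append('b')
q.append('c')

print("Initial queue")
print(q)

# Removing elements from the queue
print("\nElements dequeued from the queue")
print(q.popleft())
print(q.popleft())
print(q.popleft())

print("\nQueue after removing elements")
print(q)

# One more q.popleft() here would raise the IndexError shown below,
# because the queue is now empty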

Output: 
 

Initial queue
deque(['a', 'b', 'c'])

Elements dequeued from the queue
a
b
c

Queue after removing elements
deque([])

Traceback (most recent call last):
  File "/home/b2fa8ce438c2a9f82d6c3e5da587490f.py", line 23, in <module>
    q.popleft()
IndexError: pop from an empty deque

 

Implementation using queue.Queue

queue is a built-in module of Python which is used to implement a queue. queue.Queue(maxsize) initializes a variable to a maximum size of maxsize. A maxsize of zero '0' means an infinite queue. This Queue follows the FIFO rule.
There are various functions available in this module: 
 

  • maxsize – Number of items allowed in the queue.
  • empty() – Return True if the queue is empty, False otherwise.
  • full() – Return True if there are maxsize items in the queue. If the queue was initialized with maxsize=0 (the default), then full() never returns True.
  • get() – Remove and return an item from the queue. If queue is empty, wait until an item is available.
  • get_nowait() – Return an item if one is immediately available, else raise QueueEmpty.
  • put(item) – Put an item into the queue. If the queue is full, wait until a free slot is available before adding the item.
  • put_nowait(item) – Put an item into the queue without blocking. If no free slot is immediately available, raise QueueFull.
  • qsize() – Return the number of items in the queue.

 

Python3
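
A minimal queue.Queue sketch that produces the output shown below:

# Queue implementation using queue.Queue
from queue import Queue

# Initializing a queue with a maximum size of 3
q = Queue(maxsize=3)

# qsize() gives the number of items currently in the queue
print(q.qsize())

# Adding elements to the queue
q.put('a')
q.put('b')
q.put('c')

# Return True if the queue is full, False otherwise
print("Full: ", q.full())

# Removing elements from the queue
print("Elements dequeued from the queue")
print(q.get())
print(q.get())
print(q.get())

# Return True if the queue is empty, False otherwise
print("Empty: ", q.empty())

q.put(1)
print("Empty: ", q.empty())
print("Full: ", q.full())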

Output: 
 

0
Full: True
Elements dequeued from the queue
a
b
c
Empty: True
Empty: False
Full: False

 


Source: https://www.geeksforgeeks.org/queue-in-python/


queue — A synchronized queue class¶

Source code: Lib/queue.py


The queue module implements multi-producer, multi-consumer queues. It is especially useful in threaded programming when information must be exchanged safely between multiple threads. The Queue class in this module implements all the required locking semantics.

The module implements three types of queue, which differ only in the order in which the entries are retrieved. In a FIFO queue, the first tasks added are the first retrieved. In a LIFO queue, the most recently added entry is the first retrieved (operating like a stack). With a priority queue, the entries are kept sorted (using the heapq module) and the lowest valued entry is retrieved first.

Internally, those three types of queues use locks to temporarily block competing threads; however, they are not designed to handle reentrancy within a thread.

In addition, the module implements a “simple” FIFO queue type, SimpleQueue, whose specific implementation provides additional guarantees in exchange for the smaller functionality.

The module defines the following classes and exceptions:

class queue.Queue(maxsize=0)¶

Constructor for a FIFO queue. maxsize is an integer that sets the upperbound limit on the number of items that can be placed in the queue. Insertion will block once this size has been reached, until queue items are consumed. If maxsize is less than or equal to zero, the queue size is infinite.

class queue.LifoQueue(maxsize=0)¶

Constructor for a LIFO queue. maxsize is an integer that sets the upperbound limit on the number of items that can be placed in the queue. Insertion will block once this size has been reached, until queue items are consumed. If maxsize is less than or equal to zero, the queue size is infinite.

class queue.PriorityQueue(maxsize=0)¶

Constructor for a priority queue. maxsize is an integer that sets the upperbound limit on the number of items that can be placed in the queue. Insertion will block once this size has been reached, until queue items are consumed. If maxsize is less than or equal to zero, the queue size is infinite.

The lowest valued entries are retrieved first (the lowest valued entry is the one returned by sorted(list(entries))[0]). A typical pattern for entries is a tuple in the form: (priority_number, data).

If the data elements are not comparable, the data can be wrapped in a class that ignores the data item and only compares the priority number:

from dataclasses import dataclass, field
from typing import Any

@dataclass(order=True)
class PrioritizedItem:
    priority: int
    item: Any=field(compare=False)
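
A short usage sketch of the wrapper above with PriorityQueue (the payload dicts are arbitrary):

from dataclasses import dataclass, field
from typing import Any
import queue

@dataclass(order=True)
class PrioritizedItem:
    priority: int
    item: Any = field(compare=False)

pq = queue.PriorityQueue()
pq.put(PrioritizedItem(priority=2, item={'name': 'low'}))
pq.put(PrioritizedItem(priority=1, item={'name': 'high'}))

first = pq.get()
print(first.priority, first.item)   # 1 {'name': 'high'}: entries compare on priority only
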
class queue.SimpleQueue¶

Constructor for an unbounded FIFO queue. Simple queues lack advanced functionality such as task tracking.

exception queue.Empty

Exception raised when non-blocking get() (or get_nowait()) is called on a Queue object which is empty.

exception queue.Full

Exception raised when non-blocking put() (or put_nowait()) is called on a Queue object which is full.

Queue Objects¶

Queue objects (Queue, LifoQueue, or PriorityQueue) provide the public methods described below.

Queue.qsize()¶

Return the approximate size of the queue. Note, qsize() > 0 doesn’t guarantee that a subsequent get() will not block, nor will qsize() < maxsize guarantee that put() will not block.

Queue.empty()¶

Return True if the queue is empty, False otherwise. If empty() returns True it doesn’t guarantee that a subsequent call to put() will not block. Similarly, if empty() returns False it doesn’t guarantee that a subsequent call to get() will not block.

Queue.full()¶

Return True if the queue is full, False otherwise. If full() returns True it doesn’t guarantee that a subsequent call to get() will not block. Similarly, if full() returns False it doesn’t guarantee that a subsequent call to put() will not block.

Queue.put(item, block=True, timeout=None)¶

Put item into the queue. If optional args block is true and timeout is None (the default), block if necessary until a free slot is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Full exception if no free slot was available within that time. Otherwise (block is false), put an item on the queue if a free slot is immediately available, else raise the Full exception (timeout is ignored in that case).

Queue.put_nowait(item)¶

Equivalent to put(item, False).

Queue.get(block=True, timeout=None)¶

Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. Otherwise (block is false), return an item if one is immediately available, else raise the Empty exception (timeout is ignored in that case).

Prior to 3.0 on POSIX systems, and for all versions on Windows, if block is true and timeout is None, this operation goes into an uninterruptible wait on an underlying lock. This means that no exceptions can occur, and in particular a SIGINT will not trigger a KeyboardInterrupt.

Queue.get_nowait()¶

Equivalent to get(False).

Two methods are offered to support tracking whether enqueued tasks have been fully processed by daemon consumer threads.

Queue.task_done()¶

Indicate that a formerly enqueued task is complete. Used by queue consumer threads. For each get() used to fetch a task, a subsequent call to task_done() tells the queue that the processing on the task is complete.

If a join() is currently blocking, it will resume when all items have been processed (meaning that a task_done() call was received for every item that had been put() into the queue).

Raises a ValueError if called more times than there were items placed in the queue.

Queue.join()¶

Blocks until all items in the queue have been gotten and processed.

The count of unfinished tasks goes up whenever an item is added to the queue. The count goes down whenever a consumer thread calls task_done() to indicate that the item was retrieved and all work on it is complete. When the count of unfinished tasks drops to zero, join() unblocks.

Example of how to wait for enqueued tasks to be completed:

import threading, queue

q = queue.Queue()

def worker():
    while True:
        item = q.get()
        print(f'Working on {item}')
        print(f'Finished {item}')
        q.task_done()

# turn-on the worker thread
threading.Thread(target=worker, daemon=True).start()

# send thirty task requests to the worker
for item in range(30):
    q.put(item)
print('All task requests sent\n', end='')

# block until all tasks are done
q.join()
print('All work completed')

SimpleQueue Objects¶

SimpleQueue objects provide the public methods described below.

SimpleQueue.qsize()¶

Return the approximate size of the queue. Note, qsize() > 0 doesn’t guarantee that a subsequent get() will not block.

SimpleQueue.empty()¶

Return True if the queue is empty, False otherwise. If empty() returns False it doesn’t guarantee that a subsequent call to get() will not block.

SimpleQueue.put(item, block=True, timeout=None)¶

Put item into the queue. The method never blocks and always succeeds (except for potential low-level errors such as failure to allocate memory). The optional args block and timeout are ignored and only provided for compatibility with Queue.put().

CPython implementation detail: This method has a C implementation which is reentrant. That is, a put() or get() call can be interrupted by another put() call in the same thread without deadlocking or corrupting internal state inside the queue. This makes it appropriate for use in destructors such as __del__ methods or weakref callbacks.

SimpleQueue.put_nowait(item)¶

Equivalent to put(item), provided for compatibility with Queue.put_nowait().

SimpleQueue.get(block=True, timeout=None)¶

Remove and return an item from the queue. If optional args block is true and timeout is None (the default), block if necessary until an item is available. If timeout is a positive number, it blocks at most timeout seconds and raises the Empty exception if no item was available within that time. Otherwise (block is false), return an item if one is immediately available, else raise the Empty exception (timeout is ignored in that case).

SimpleQueue.get_nowait()¶

Equivalent to get(False).
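
A minimal SimpleQueue sketch; note that it has no maxsize, task_done(), or join():

import queue

sq = queue.SimpleQueue()
sq.put('first')         # never blocks: the queue is unbounded
sq.put('second')

print(sq.qsize())       # 2
print(sq.get())         # 'first'

try:
    while True:         # drain the rest without blocking
        print(sq.get_nowait())
except queue.Empty:
    print('queue drained')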

See also

Class multiprocessing.Queue

A queue class for use in a multi-processing (rather than multi-threading) context.

collections.deque is an alternative implementation of unbounded queues with fast atomic append() and popleft() operations that do not require locking and also support indexing.

Source: https://docs.python.org/3/library/queue.html

