TY - JOUR
T1 - Gradient-based Image Recovery Methods from Incomplete Fourier Measurements
JF - IEEE Transactions on Image Processing
Y1 - 2012
A1 - Patel, Vishal M.
A1 - Maleh, R.
A1 - Gilbert, A. C.
A1 - Chellappa, Rama
KW - Compressed sensing
KW - Fourier transforms
KW - Image coding
KW - Image edge detection
KW - Image reconstruction
KW - L1-minimization
KW - minimization
KW - Noise measurement
KW - OPTIMIZATION
KW - Poisson solver
KW - Sparse recovery
KW - Total variation
KW - TV
AB - A major problem in imaging applications such as Magnetic Resonance Imaging (MRI) and Synthetic Aperture Radar (SAR) is the task of trying to reconstruct an image with the smallest possible set of Fourier samples, every single one of which has a potential time and/or power cost. The theory of Compressive Sensing (CS) points to ways of exploiting inherent sparsity in such images in order to achieve accurate recovery using sub-Nyquist sampling schemes. Traditional CS approaches to this problem consist of solving total-variation minimization programs with Fourier measurement constraints or other variations thereof. This paper takes a different approach: since the horizontal and vertical differences of a medical image are each more sparse or compressible than the corresponding total-variational image, CS methods will be more successful in recovering these differences individually. We develop an algorithm called GradientRec that uses a CS algorithm to recover the horizontal and vertical gradients and then estimates the original image from these gradients. We present two methods of solving the latter inverse problem: one based on least squares optimization and the other based on a generalized Poisson solver. After a thorough derivation of our complete algorithm, we present the results of various experiments that compare the effectiveness of the proposed method against other leading methods.
VL - PP
SN - 1057-7149
CP - 99
M3 - 10.1109/TIP.2011.2159803
ER -
TY - CONF
T1 - Optimizing epidemic protection for socially essential workers
T2 - Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium
Y1 - 2012
A1 - Barrett, Chris
A1 - Beckman, Richard
A1 - Bisset, Keith
A1 - Chen, Jiangzhuo
A1 - DuBois, Thomas
A1 - Eubank, Stephen
A1 - Kumar, V. S. Anil
A1 - Lewis, Bryan
A1 - Marathe, Madhav V.
A1 - Srinivasan, Aravind
A1 - Stretz, Paula E.
KW - epidemiology
KW - OPTIMIZATION
KW - public health informatics
AB - Public-health policy makers have many tools to mitigate an epidemic's effects. Most related research focuses on the direct effects on those infected (in terms of health, life, or productivity). Interventions including treatment, prophylaxis, quarantine, and social distancing are well studied in this context. These interventions do not address indirect effects due to the loss of critical services and infrastructures when too many of those responsible for their day-to-day operations fall ill. We examine, both analytically and through simulation, the protection of such essential subpopulations by sequestering them, effectively isolating them into groups during an epidemic. We develop a framework for studying the benefits of sequestering and heuristics for when to sequester. We also prove a key property of sequestering placement that helps partition the subpopulations optimally. Thus we provide a first step toward determining how to allocate resources between the direct protection of a population and the protection of those responsible for critical services.
JA - Proceedings of the 2nd ACM SIGHIT International Health Informatics Symposium
T3 - IHI '12
PB - ACM
CY - New York, NY, USA
SN - 978-1-4503-0781-9
UR - http://doi.acm.org/10.1145/2110363.2110371
M3 - 10.1145/2110363.2110371
ER -
TY - RPRT
T1 - Dataflow-Based Implementation of Layered Sensing Applications
Y1 - 2011
A1 - Bhattacharyya, Shuvra S.
A1 - Chung-Ching Shen
A1 - Plishker, William
A1 - Sane, Nimish
A1 - Wu, Hsiang-Huang
A1 - Gu, Ruirui
KW - *COMPUTER AIDED DESIGN
KW - *DATA FUSION
KW - *DATAFLOW
KW - *LAYERS
KW - *SOFTWARE TOOLS
KW - COMPUTER PROGRAMMING AND SOFTWARE
KW - DETECTION
KW - High performance computing
KW - LAYERED SENSING
KW - OPTIMIZATION
KW - Signal processing
KW - synthesis
KW - T2KA
AB - This report describes a new dataflow-based technology and associated design tools for high-productivity design, analysis, and optimization of layered sensing software for signal processing systems. Our approach provides novel capabilities, based on the principles of task-level dataflow analysis, for exploring and optimizing interactions across application behavior, operational context, high-performance embedded processing platforms, and implementation constraints. In particular, we introduce and deliver novel software tools, called the targeted dataflow interchange format (TDIF) and Dataflow Interchange Format Markup Language (DIFML), for design and implementation of layered sensing and signal processing systems. The TDIF-CUDA (Compute Unified Device Architecture) environment is a graphics processing unit targeted software synthesis tool that provides a unique integration of dynamic dataflow modeling, retargetable actor construction, software synthesis, and instrumentation-based schedule evaluation and tuning. The DIFML package is a software package for the DIFML format, which is an Extensible Markup Language (XML)-based format for exchanging information between DIF and other tools.
ER -
TY - CONF
T1 - Design methods for Wireless Sensor Network Building Energy Monitoring Systems
T2 - 2011 IEEE 36th Conference on Local Computer Networks (LCN)
Y1 - 2011
A1 - Cho, Inkeun
A1 - Chung-Ching Shen
A1 - Potbhare, S.
A1 - Bhattacharyya, Shuvra S.
A1 - Goldsman, N.
KW - Analytical models
KW - application-level interfacing behavior
KW - building energy monitoring system
KW - Buildings
KW - dataflow technique
KW - embedded sensor node
KW - energy analysis method
KW - Energy consumption
KW - energy management systems
KW - Energy resolution
KW - IEEE 802.15.4 MAC functionality
KW - Monitoring
KW - OPTIMIZATION
KW - wireless sensor network
KW - Wireless sensor networks
KW - WSNBEMS
KW - Zigbee
AB - In this paper, we present a new energy analysis method for evaluating energy consumption of embedded sensor nodes at the application level and the network level. Then we apply the proposed energy analysis method to develop new energy management schemes in order to maximize lifetime for Wireless Sensor Network Building Energy Monitoring Systems (WSNBEMS). At the application level, we develop a new design approach that uses dataflow techniques to model the application-level interfacing behavior between the processor and sensors on an embedded sensor node. At the network level, we analyze the energy consumption of the IEEE 802.15.4 MAC functionality. Based on our techniques for modeling and energy analysis, we have implemented an optimized WSNBEMS for a real building, and validated our energy analysis techniques through measurements on this implementation. The performance of our implementation is also evaluated in terms of monitoring accuracy and energy consumption savings. We have demonstrated that by applying the proposed scheme, system lifetime can be improved significantly without affecting monitoring accuracy.
JA - 2011 IEEE 36th Conference on Local Computer Networks (LCN)
ER -
TY - CONF
T1 - Maximizing Expected Utility for Stochastic Combinatorial Optimization Problems
T2 - 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS)
Y1 - 2011
A1 - Li, Jian
A1 - Deshpande, Amol
KW - Approximation algorithms
KW - Approximation methods
KW - combinatorial problems
KW - Fourier series
KW - knapsack problems
KW - optimisation
KW - OPTIMIZATION
KW - polynomial approximation
KW - polynomial time approximation algorithm
KW - Polynomials
KW - Random variables
KW - stochastic combinatorial optimization
KW - stochastic knapsack
KW - stochastic shortest path
KW - stochastic spanning tree
KW - vectors
AB - We study the stochastic versions of a broad class of combinatorial problems where the weights of the elements in the input dataset are uncertain. The class of problems that we study includes shortest paths, minimum weight spanning trees, and minimum weight matchings over probabilistic graphs, and other combinatorial problems like knapsack. We observe that the expected value is inadequate in capturing different types of risk-averse or risk-prone behaviors, and instead we consider a more general objective which is to maximize the expected utility of the solution for some given utility function, rather than the expected weight (expected weight becomes a special case). We show that we can obtain a polynomial time approximation algorithm with additive error ϵ for any ϵ > 0, if there is a pseudopolynomial time algorithm for the exact version of the problem (this is true for the problems mentioned above) and the maximum value of the utility function is bounded by a constant. Our result generalizes several prior results on stochastic shortest path, stochastic spanning tree, and stochastic knapsack.
Our algorithm for utility maximization makes use of the separability of exponential utility and a technique to decompose a general utility function into exponential utility functions, which may be useful in other stochastic optimization problems.
JA - 2011 IEEE 52nd Annual Symposium on Foundations of Computer Science (FOCS)
PB - IEEE
SN - 978-1-4577-1843-4
M3 - 10.1109/FOCS.2011.33
ER -
TY - CONF
T1 - Online allocation of display advertisements subject to advanced sales contracts
T2 - Proceedings of the Third International Workshop on Data Mining and Audience Intelligence for Advertising
Y1 - 2009
A1 - Alaei, Saeed
A1 - Arcaute, Esteban
A1 - Khuller, Samir
A1 - Ma, Wenjing
A1 - Malekian, Azarakhsh
A1 - Tomlin, John
KW - display advertising
KW - modeling
KW - online algorithms
KW - OPTIMIZATION
KW - simulation
AB - In this paper we propose a utility model that accounts for both sales and branding advertisers. We first study the computational complexity of optimization problems related to both online and offline allocation of display advertisements. Next, we focus on a particular instance of the online allocation problem and design a simple online algorithm with provable approximation guarantees. Our algorithm is near optimal, as is shown by a matching lower bound. Finally, we report on experiments to establish actual case behavior on some real datasets, with encouraging results.
JA - Proceedings of the Third International Workshop on Data Mining and Audience Intelligence for Advertising
T3 - ADKDD '09
PB - ACM
CY - New York, NY, USA
SN - 978-1-60558-671-7
UR - http://doi.acm.org/10.1145/1592748.1592758
M3 - 10.1145/1592748.1592758
ER -
TY - CONF
T1 - A Scalable Projective Bundle Adjustment Algorithm using the L infinity Norm
T2 - Computer Vision, Graphics & Image Processing, 2008. ICVGIP '08. Sixth Indian Conference on
Y1 - 2008
A1 - Mitra, K.
A1 - Chellappa, Rama
KW - bundle adjustment algorithm
KW - cameras
KW - computational complexity
KW - convex programming
KW - image reconstruction
KW - iterative methods
KW - L∞ norm
KW - large scale optimization problem
KW - memory requirement
KW - motion estimation
KW - OPTIMIZATION
KW - parameter estimation
KW - projective reconstruction
KW - quasiconvex optimization problem
KW - reprojection error
KW - scalable algorithm
KW - structure from motion
AB - The traditional bundle adjustment algorithm for the structure from motion problem has a computational complexity of O((m+n)^3) per iteration and a memory requirement of O(mn(m+n)), where m is the number of cameras and n is the number of structure points. The sparse version of bundle adjustment has a computational complexity of O(m^3 + mn) per iteration and a memory requirement of O(mn). Here we propose an algorithm that has a computational complexity of O(mn(√m + √n)) per iteration and a memory requirement of O(max(m,n)). The proposed algorithm is based on minimizing the L∞ norm of the reprojection error. It alternately estimates the camera and structure parameters, thus reducing the potentially large scale optimization problem to many small scale subproblems, each of which is a quasi-convex optimization problem and hence can be solved globally. Experiments using synthetic and real data show that the proposed algorithm gives good performance in terms of minimizing the reprojection error and also has a good convergence rate.
JA - Computer Vision, Graphics & Image Processing, 2008. ICVGIP '08.
Sixth Indian Conference on
M3 - 10.1109/ICVGIP.2008.51
ER -
TY - CONF
T1 - Usage-based DHCP lease time optimization
T2 - Proceedings of the 7th ACM SIGCOMM conference on Internet measurement
Y1 - 2007
A1 - Khadilkar, Manas
A1 - Feamster, Nick
A1 - Sanders, Matt
A1 - Clark, Russ
KW - dhcp
KW - network management
KW - OPTIMIZATION
KW - usage
AB - The Dynamic Host Configuration Protocol (DHCP) is used to dynamically allocate address space to hosts on a local area network. Despite its widespread usage, few studies exist on DHCP usage patterns, and even less is known about the importance of setting the lease time (the time that a client retains ownership over some IP address) to an appropriate value. Lease time can greatly affect the tradeoff between address space utilization and the number of both renewal messages and client session expirations. In this paper, using a DHCP trace for 5 weekdays from the Georgia Tech campus network, we present the largest known study of DHCP utilization. We also explore how various strategies for setting lease times can dramatically reduce the number of renewals and expirations without prohibitively increasing address space utilization.
JA - Proceedings of the 7th ACM SIGCOMM conference on Internet measurement
T3 - IMC '07
PB - ACM
CY - New York, NY, USA
SN - 978-1-59593-908-1
UR - http://doi.acm.org/10.1145/1298306.1298315
M3 - 10.1145/1298306.1298315
ER -
TY - CONF
T1 - Differentiated traffic engineering for QoS provisioning
T2 - INFOCOM 2005. 24th Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings IEEE
Y1 - 2005
A1 - Tabatabaee, V.
A1 - Bhattacharjee, Bobby
A1 - La, R. J.
A1 - Shayman, M. A.
KW - computer networks
KW - differentiated traffic engineering
KW - DiffServ
KW - DTE
KW - multipath routing
KW - network link
KW - network management
KW - nonconvex optimisation problem
KW - OPTIMIZATION
KW - packet networks
KW - performance evaluation
KW - QoS provisioning
KW - quality of service
KW - simulation-based performance evaluation
KW - source based routing
KW - telecommunication links
KW - telecommunication traffic
AB - We introduce a new approach for QoS provisioning in packet networks based on the notion of differentiated traffic engineering (DTE). We consider a single AS network capable of source based multi-path routing. We do not require sophisticated queuing or per-class scheduling at individual routers; instead, if a link is used to forward QoS sensitive packets, we maintain its utilization below a threshold. As a consequence, DTE eliminates the need for per-flow (IntServ) or per-class (DiffServ) packet processing tasks such as traffic classification, queueing, shaping, policing and scheduling in the core and hence poses a lower burden on the network management unit. Conversely, DTE utilizes network bandwidth much more efficiently than simple over-provisioning. In this paper, we propose a complete architecture and an algorithmic structure for DTE. We show that our scheme can be formulated as a non-convex optimization problem, and we present an optimal solution framework based on simulated annealing. We present a simulation-based performance evaluation of DTE, and compare our scheme to existing (gradient projection) methods.
JA - INFOCOM 2005. 24th Annual Joint Conference of the IEEE Computer and Communications Societies. Proceedings IEEE
VL - 4
M3 - 10.1109/INFCOM.2005.1498521
ER -
TY - JOUR
T1 - Optimal models of disjunctive logic programs: semantics, complexity, and computation
JF - Knowledge and Data Engineering, IEEE Transactions on
Y1 - 2004
A1 - Leone, N.
A1 - Scarcello, F.
A1 - Subrahmanian, V. S.
KW - computational complexity
KW - disjunctive logic programming
KW - knowledge representation
KW - language
KW - logic programming
KW - minimal model semantics
KW - nonmonotonic reasoning
KW - objective function
KW - optimisation problems
KW - OPTIMIZATION
KW - stable model semantics
KW - user-specified objective function
AB - Almost all semantics for logic programs with negation identify a set, SEM(P), of models of program P, as the intended semantics of P, and any model M in this class is considered a possible meaning of P with regard to the semantics the user has in mind. Thus, for example, in the case of stable models [M. Gelfond et al., (1988)], choice models [D. Sacca et al., (1990)], answer sets [M. Gelfond et al., (1991)], etc., different possible models correspond to different ways of "completing" the incomplete information in the logic program. However, different end-users may have different ideas on which of these different models in SEM(P) is a reasonable one from their point of view. For instance, given SEM(P), user U1 may prefer model M1 ∈ SEM(P) to model M2 ∈ SEM(P) based on some evaluation criterion that she has. We develop a logic program semantics based on optimal models. This semantics does not add yet another semantics to the logic programming arena - it takes as input an existing semantics SEM(P) and a user-specified objective function Obj, and yields a new semantics Opt(P) ⊆ SEM(P) that realizes the objective function within the framework of preferred models identified already by SEM(P). Thus, the user who may or may not know anything about logic programming has considerable flexibility in making the system reflect her own objectives by building "on top" of existing semantics known to the system.
In addition to the declarative semantics, we provide a complete complexity analysis and algorithms to compute optimal models under varied conditions when SEM(P) is the stable model semantics, the minimal models semantics, and the all-models semantics.
VL - 16
SN - 1041-4347
CP - 4
M3 - 10.1109/TKDE.2004.1269672
ER -
TY - CONF
T1 - Construction of an efficient overlay multicast infrastructure for real-time applications
T2 - INFOCOM 2003. Twenty-Second Annual Joint Conference of the IEEE Computer and Communications. IEEE Societies
Y1 - 2003
A1 - Banerjee, S.
A1 - Kommareddy, C.
A1 - Kar, K.
A1 - Bhattacharjee, Bobby
A1 - Khuller, Samir
KW - application-layer multicast
KW - decentralized scheme
KW - distributed forwarding entities
KW - Internet
KW - iterative methods
KW - media-streaming applications
KW - MSN
KW - multicast communication
KW - NP-hard
KW - OPTIMIZATION criterion
KW - overlay infrastructure
KW - real-time systems
KW - service nodes
AB - This paper presents an overlay architecture where service providers deploy a set of service nodes (called MSNs) in the network to efficiently implement media-streaming applications. These MSNs are organized into an overlay and act as application-layer multicast forwarding entities for a set of clients. We present a decentralized scheme that organizes the MSNs into an appropriate overlay structure that is particularly beneficial for real-time applications. We formulate our optimization criterion as a "degree-constrained minimum average-latency problem" which is known to be NP-hard. A key feature of this formulation is that it gives a dynamic priority to different MSNs based on the size of its service set. Our proposed approach iteratively modifies the overlay tree using localized transformations to adapt with changing distribution of MSNs, clients, as well as network conditions.
We show that a centralized greedy approach to this problem does not perform quite as well, while our distributed iterative scheme efficiently converges to near-optimal solutions.
JA - INFOCOM 2003. Twenty-Second Annual Joint Conference of the IEEE Computer and Communications. IEEE Societies
VL - 2
M3 - 10.1109/INFCOM.2003.1208987
ER -
TY - CONF
T1 - 3D face reconstruction from video using a generic model
T2 - Multimedia and Expo, 2002. ICME '02. Proceedings. 2002 IEEE International Conference on
Y1 - 2002
A1 - Chowdhury, A. R.
A1 - Chellappa, Rama
A1 - Krishnamurthy, S.
A1 - Vo, T.
KW - 3D face reconstruction
KW - computer vision
KW - energy function
KW - face recognition
KW - generic model
KW - human face
KW - image reconstruction
KW - image sequences
KW - Markov chain Monte Carlo methods
KW - MCMC sampling
KW - MOTION
KW - optimisation
KW - OPTIMIZATION
KW - SfM algorithms
KW - signal processing
KW - structure from motion
KW - surveillance
KW - video signal processing
AB - Reconstructing a 3D model of a human face from a video sequence is an important problem in computer vision, with applications to recognition, surveillance, multimedia, etc. However, the quality of 3D reconstructions using structure from motion (SfM) algorithms is often not satisfactory. One common method of overcoming this problem is to use a generic model of a face. Existing work using this approach initializes the reconstruction algorithm with this generic model. The problem with this approach is that the algorithm can converge to a solution very close to this initial value, resulting in a reconstruction which resembles the generic model rather than the particular face in the video which needs to be modeled. We propose a method of 3D reconstruction of a human face from video in which the 3D reconstruction algorithm and the generic model are handled separately.
A 3D estimate is obtained purely from the video sequence using SfM algorithms, without use of the generic model. The final 3D model is obtained after combining the SfM estimate and the generic model using an energy function that corrects for the errors in the estimate by comparing local regions in the two models. The optimization is done using a Markov chain Monte Carlo (MCMC) sampling strategy. The main advantage of our algorithm over others is that it is able to retain the specific features of the face in the video sequence even when these features are different from those of the generic model. The evolution of the 3D model through the various stages of the algorithm is presented.
JA - Multimedia and Expo, 2002. ICME '02. Proceedings. 2002 IEEE International Conference on
VL - 1
M3 - 10.1109/ICME.2002.1035815
ER -
TY - RPRT
T1 - A Geometric Algorithm for Multi-Part Milling Cutter Selection
Y1 - 2000
A1 - Yao, Zhiyang
A1 - Gupta, Satyandra K.
A1 - Nau, Dana S.
KW - algorithms
KW - computer aided manufacturing (CAM)
KW - cutter selection
KW - Manufacturing
KW - multi-part process planning
KW - Next-Generation Product Realization Systems
KW - OPTIMIZATION
AB - Mass customization results in smaller batch sizes in manufacturing that require large numbers of setups and tool changes. Traditional process planning, which generates plans for one part at a time, is no longer applicable.

In this paper, we propose the idea of process planning for small batch manufacturing, i.e., we simultaneously consider multiple parts and exploit opportunities for sharing manufacturing resources such that the process plan will be optimized over the entire set of parts. In particular, we discuss a geometric algorithm for multiple part cutter selection in 2-1/2D milling operations.

We define the 2-1/2D milling operations as covering the target region without intersecting the obstruction region. This definition allows us to handle the open edge problem. Based on this definition, we first discuss the lower and upper bounds of cutter sizes that are feasible for given parts. Then we introduce the geometric algorithm to find the coverable area for a given cutter. Following that, we discuss our approach to accounting for cutter loading time and changing time when selecting multiple cutters for multiple parts. We represent the cutter selection problem as a shortest-path problem and use Dijkstra's algorithm to solve it. By using this algorithm, a set of cutters is selected to achieve the optimum machining cost for multiple parts.

Our research illustrates a multiple-part process planning approach that is suitable for small batch manufacturing. At the same time, the algorithm given in this paper clarifies the 2-1/2D milling problem and can also help in the cutter path planning problem.
PB - Institute for Systems Research, University of Maryland, College Park
VL - ISR; TR 2000-42
UR - http://drum.lib.umd.edu//handle/1903/6139
ER -