With advances in photographic technology, current image sensors can capture pictures with a high dynamic range of up to eight orders of magnitude, closely approximating the sensitivity of human vision in the photopic regime. However, existing monitors and projectors have a much more limited dynamic range, inadequate to reproduce the full range of luminance values present in natural scenes and captured by current sensors. When rendering high-dynamic-range (HDR) images on low-dynamic-range (LDR) display devices, tone mapping operators must therefore be adopted to compress the dynamic range. Despite its practical importance, several fundamental issues in HDR image rendering remain unresolved, limiting its broad application.

First, to compare image rendering algorithms, one often needs to collect human data on a set of (possibly cherry-picked) visual examples, which is biased and inefficient. Instead of manual pre-selection, we will describe a fair and efficient subjective assessment method that automatically samples a minimal set of unbiased, diverse, and adaptive images that best differentiate the competing methods. To demonstrate the generality of our method, we will apply it to image rendering as well as to photographic style transfer and image-to-image translation, where fair and efficient comparison is not well practiced. Meanwhile, motivated by the lack of no-reference quality models in this field, we will construct a learning-based objective assessment method to evaluate HDR image rendering, regularized by other related computational photography tasks.

Second, the design of most rendering algorithms for human perception is not guided by well-established objective assessment models, and therefore their perceptual optimality is unclear. Meanwhile, the enhancement capabilities of image rendering for machine perception may not always translate into gains on downstream visual recognition tasks. This project will take initial steps toward perception-driven optimization of HDR image rendering. We will design a biologically inspired rendering method by optimizing the proposed objective assessment model. To make it more practical, we will take different display constraints into account. We will also situate the proposed method within a perception-driven framework to assist machine vision (instantiated by semantic segmentation) in challenging low-light conditions. Given the promising preliminary results, we believe that the proposed schemes will 1) reliably measure the progress of HDR image rendering and related fields, 2) elucidate their strengths and weaknesses, and 3) provide novel and practical solutions that will significantly advance HDR image rendering for both human and machine perception.
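As a concrete illustration of the dynamic range compression performed by a tone mapping operator, the listing below gives a minimal Python sketch of a classic global operator in the spirit of Reinhard et al. (2002). It is not the rendering method proposed in this project; the function name reinhard_global and its parameters (key, eps) are illustrative choices for this sketch only.

    import numpy as np

    def reinhard_global(hdr_lum, key=0.18, eps=1e-6):
        # hdr_lum: 2-D array of scene-referred luminance values (arbitrary scale).
        # key: target "middle grey" of the tone-mapped output.
        # Log-average (geometric-mean) luminance of the scene.
        log_avg = np.exp(np.mean(np.log(hdr_lum + eps)))
        # Scale the scene so its log-average luminance maps to the chosen key value.
        scaled = key * hdr_lum / log_avg
        # Compressive nonlinearity: large luminances are squeezed toward 1.
        return scaled / (1.0 + scaled)

    # Example on synthetic HDR luminance (hypothetical data for illustration):
    hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(480, 640))
    ldr = reinhard_global(hdr)  # values now lie in [0, 1), ready for display quantization

The compressive curve L / (1 + L) maps the unbounded scene luminance range into the displayable interval, which is precisely the kind of many-to-few mapping whose perceptual quality the proposed subjective and objective assessment methods are meant to evaluate.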