References

1 Park, J.H., Younas, M., Arabnia, H.R., and Chilamkurti, N. (2021). Emerging ICT applications and services – big data, IoT, and cloud computing. International Journal of Communication Systems. https://onlinelibrary.wiley.com/doi/full/10.1002/dac.4668.

2 Amini, M.H., Imteaj, A., and Pardalos, P.M. (2020). Interdependent networks: a data science perspective. Patterns 1: 100003. https://www.sciencedirect.com/science/article/pii/S2666389920300039.

3 Mohammadi, F.G. and Amini, M.H. (2019). Promises of meta‐learning for device‐free human sensing: learn to sense. Proceedings of the 1st ACM International Workshop on Device‐Free Human Sensing, pp. 44–47.

4 Amini, M.H., Mohammadi, J., and Kar, S. (2020). Promises of fully distributed optimization for IoT‐based smart city infrastructures. In: Optimization, Learning, and Control for Interdependent Complex Networks (ed. M.H. Amini), 15–35. Springer.

5 Amini, M.H., Arasteh, H., and Siano, P. (2019). Sustainable smart cities through the lens of complex interdependent infrastructures: panorama and state‐of‐the‐art. In: Sustainable Interdependent Networks II (ed. M.H. Amini, K.G. Boroojeni, S.S. Iyengar et al.), 45–68. Springer.

6 Iskandaryan, D., Ramos, F., and Trilles, S. (2020). Air quality prediction in smart cities using machine learning technologies based on sensor data: a review. Applied Sciences 10 (7): 2401.

7 Batty, M., Axhausen, K.W., Giannotti, F. et al. (2012). Smart cities of the future. The European Physical Journal Special Topics 214 (1): 481–518.

8 Deng, J., Dong, W., Socher, R. et al. (2009). ImageNet: A large‐scale hierarchical image database. 2009 IEEE Conference on Computer Vision and Pattern Recognition, IEEE, pp. 248–255.

9 Lin, T.‐Y., Maire, M., Belongie, S. et al. (2014). Microsoft COCO: Common objects in context. European Conference on Computer Vision, Springer, pp. 740–755.

10 Xiao, J., Hays, J., Ehinger, K.A. et al. (2010). SUN database: large‐scale scene recognition from abbey to zoo. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, IEEE, pp. 3485–3492.

11 Griffin, G., Holub, A., and Perona, P. (2007). Caltech‐256 object category dataset. Technical report, California Institute of Technology.

12 Zhou, B., Lapedriza, A., Xiao, J. et al. (2014). Learning deep features for scene recognition using places database. Advances in Neural Information Processing Systems 27: 487–495.

13 Rajpurkar, P., Zhang, J., Lopyrev, K., and Liang, P. (2016). SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250.

14 Wang, A., Singh, A., Michael, J. et al. (2018). GLUE: A multi‐task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461.

15 Zellers, R., Bisk, Y., Schwartz, R., and Choi, Y. (2018). SWAG: A large‐scale adversarial dataset for grounded commonsense inference. arXiv preprint arXiv:1808.05326.

16 Krishna, R., Zhu, Y., Groth, O. et al. (2017). Visual genome: connecting language and vision using crowdsourced dense image annotations. International Journal of Computer Vision 123 (1): 32–73.

17 Antol, S., Agrawal, A., Lu, J. et al. (2015). VQA: Visual question answering. Proceedings of the IEEE International Conference on Computer Vision, pp. 2425–2433.

18 Shenavarmasouleh, F. and Arabnia, H.R. (2020). DRDr: Automatic masking of exudates and microaneurysms caused by diabetic retinopathy using mask R‐CNN and transfer learning. arXiv preprint arXiv:2007.02026.

19 Shenavarmasouleh, F., Mohammadi, F.G., Amini, M.H., and Arabnia, H.R. (2020). DRDr II: Detecting the severity level of diabetic retinopathy using mask RCNN and transfer learning. arXiv preprint arXiv:2011.14733.

20 Shenavarmasouleh, F. and Arabnia, H. (2019). Causes of misleading statistics and research results irreproducibility: a concise review. 2019 International Conference on Computational Science and Computational Intelligence (CSCI), pp. 465–470.

21 Held, R. and Hein, A. (1963). Movement‐produced stimulation in the development of visually guided behavior. Journal of Comparative and Physiological Psychology 56 (5): 872.

22 Moravec, H. (1984). Locomotion, vision and intelligence.

23 Hoffmann, M. and Pfeifer, R. (2012). The implications of embodiment for behavior and cognition: animal and robotic case studies. arXiv preprint arXiv:1202.0440.

24 Brooks, R.A. (1991). New approaches to robotics. Science 253 (5025): 1227–1232.

25 Collins, S.H., Wisse, M., and Ruina, A. (2001). A three‐dimensional passive‐dynamic walking robot with two legs and knees. The International Journal of Robotics Research 20 (7): 607–615.

26 Iida, F. and Pfeifer, R. (2004). Cheap rapid locomotion of a quadruped robot: self‐stabilization of bounding gait. In: Intelligent Autonomous Systems, vol. 8, 642–649. Amsterdam, The Netherlands: IOS Press.

27 Yamamoto, T. and Kuniyoshi, Y. (2001). Harnessing the robot's body dynamics: a global dynamics approach. Proceedings 2001 IEEE/RSJ International Conference on Intelligent Robots and Systems. Expanding the Societal Role of Robotics in the Next Millennium (Cat. No. 01CH37180), Volume 1, IEEE, pp. 518–525.

28 Bledt, G., Powell, M.J., Katz, B. et al. (2018). MIT Cheetah 3: design and control of a robust, dynamic quadruped robot. 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), IEEE, pp. 2245–2252.

29 Hermann, K.M., Hill, F., Green, S. et al. (2017). Grounded language learning in a simulated 3D world. arXiv preprint arXiv:1706.06551.

30 Tenney, I., Das, D., and Pavlick, E. (2019). BERT rediscovers the classical NLP pipeline.

31 Pan, Y., Yao, T., Li, H., and Mei, T. (2017). Video captioning with transferred semantic attributes. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6504–6512.

32 Amirian, S., Rasheed, K., Taha, T.R., and Arabnia, H.R. (2020). Automatic image and video caption generation with deep learning: a concise review and algorithmic overlap. IEEE Access 8: 218386–218400.

33 Amirian, S., Rasheed, K., Taha, T.R., and Arabnia, H.R. (2020). Automatic generation of descriptive titles for video clips using deep learning. In: Springer Nature Research Book Series: Transactions on Computational Science & Computational Intelligence (ed. H.R. Arabnia), 17–28. Springer.

34 Gao, L., Guo, Z., Zhang, H. et al. (2017). Video captioning with attention‐based LSTM and semantic consistency. IEEE Transactions on Multimedia 19 (9): 2045–2055.

35 Yang, Y., Zhou, J., Ai, J. et al. (2018). Video captioning by adversarial LSTM. IEEE Transactions on Image Processing 27 (11): 5600–5611.

36 Singh, A., Natarajan, V., Shah, M. et al. (2019). Towards VQA models that can read. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 8317–8326.

37 Jayaraman, D. and Grauman, K. (2017). Learning image representations tied to egomotion from unlabeled video. International Journal of Computer Vision 125 (1–3): 136–161.

38 Jayaraman, D., Gao, R., and Grauman, K. (2018). ShapeCodes: self‐supervised feature learning by lifting views to viewgrids. Proceedings of the European Conference on Computer Vision (ECCV), pp. 120–136.

39 Gao, R., Feris, R., and Grauman, K. (2018). Learning to separate object sounds by watching unlabeled video. Proceedings of the European Conference on Computer Vision (ECCV), pp. 35–53.

40 Parekh, S., Essid, S., Ozerov, A. et al. (2017). Guiding audio source separation by video object information. 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), IEEE, pp. 61–65.

41 Pu, J., Panagakis, Y., Petridis, S., and Pantic, M. (2017). Audio‐visual object localization and separation using low‐rank and sparsity. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, pp. 2901–2905.

42 Parekh, S., Essid, S., Ozerov, A. et al. (2017). Motion informed audio source separation. 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, pp. 6–10.

43 Asali, E., Shenavarmasouleh, F., Mohammadi, F. et al. (2020). DeepMSRF: A novel deep multimodal speaker recognition framework with feature selection. arXiv preprint arXiv:2007.06809.

44 Aloimonos, J., Weiss, I., and Bandyopadhyay, A. (1988). Active vision. International Journal of Computer Vision 1 (4): 333–356.

45 Ballard, D.H. (1991). Animate vision. Artificial Intelligence 48 (1): 57–86.

46 Ballard, D.H. and Brown, C.M. (1992). Principles of animate vision. CVGIP: Image Understanding 56 (1): 3–21.

47 Bajcsy, R. (1988). Active perception. Proceedings of the IEEE 76 (8): 966–1005.

48 Roy, S.D., Chaudhury, S., and Banerjee, S. (2004). Active recognition through next view planning: a survey. Pattern Recognition 37 (3): 429–446.

49 Tung, H.‐Y.F., Cheng, R., and Fragkiadaki, K. (2019). Learning spatial common sense with geometry‐aware recurrent networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2595–2603.

50 Jayaraman, D. and Grauman, K. (2018). End‐to‐end policy learning for active visual categorization. IEEE Transactions on Pattern Analysis and Machine Intelligence 41 (7): 1601–1614.

51 Yang, J., Ren, Z., Xu, M. et al. (2019). Embodied visual recognition.

52 Das, A., Datta, S., Gkioxari, G. et al. (2018). Embodied question answering. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 2054–2063.

53 Wijmans, E., Datta, S., Maksymets, O. et al. (2019). Embodied question answering in photorealistic environments with point cloud perception. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6659–6668.

54 Das, A., Gkioxari, G., Lee, S. et al. (2018). Neural modular control for embodied question answering. arXiv preprint arXiv:1810.11181.

55 Gordon, D., Kembhavi, A., Rastegari, M. et al. (2018). IQA: Visual question answering in interactive environments. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4089–4098.

56 Hinchey, M.G., Sterritt, R., and Rouff, C. (2007). Swarms and swarm intelligence. Computer 40 (4): 111–113.

57 Wang, J., Feng, Z., Chen, Z. et al. (2018). Bandwidth‐efficient live video analytics for drones via edge computing. 2018 IEEE/ACM Symposium on Edge Computing (SEC), IEEE, pp. 159–173.

58 Camazine, S., Visscher, P.K., Finley, J., and Vetter, R.S. (1999). House‐hunting by honey bee swarms: collective decisions and individual behaviors. Insectes Sociaux 46 (4): 348–360.

59 Langton, C.G. (1995). Artificial Life: An Overview. Cambridge, MA: MIT Press.

60 Hara, F. and Pfeifer, R. (2003). Morpho‐Functional Machines: The New Species: Designing Embodied Intelligence. Springer Science & Business Media.

61 Murata, S., Kamimura, A., Kurokawa, H. et al. (2004). Self‐reconfigurable robots: platforms for emerging functionality. In: Embodied Artificial Intelligence (ed. F. Iida, R. Pfeifer, L. Steels et al.), 312–330. Springer.

62 Steels, L. (2001). Language games for autonomous robots. IEEE Intelligent Systems 16 (5): 16–22.

63 Steels, L. (2003). Evolving grounded communication for robots. Trends in Cognitive Sciences 7 (7): 308–312.

64 Durrant‐Whyte, H. and Bailey, T. (2006). Simultaneous localization and mapping: Part I. IEEE Robotics and Automation Magazine 13 (2): 99–110.

65 Gupta, S., Davidson, J., Levine, S. et al. (2017). Cognitive mapping and planning for visual navigation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2616–2625.

66 Zhu, Y., Mottaghi, R., Kolve, E. et al. (2017). Target‐driven visual navigation in indoor scenes using deep reinforcement learning. 2017 IEEE International Conference on Robotics and Automation (ICRA), IEEE, pp. 3357–3364.

67 Pomerleau, D.A. (1989). ALVINN: An autonomous land vehicle in a neural network. In: Advances in Neural Information Processing Systems, 305–313. https://apps.dtic.mil/sti/pdfs/ADA218975.pdf.

68 Sadeghi, F. and Levine, S. (2016). CAD2RL: Real single‐image flight without a single real image. arXiv preprint arXiv:1611.04201.

69 Wu, Y., Wu, Y., Gkioxari, G., and Tian, Y. (2018). Building generalizable agents with a realistic and rich 3D environment. arXiv preprint arXiv:1801.02209.

70 Kolve, E., Mottaghi, R., Han, W. et al. (2017). AI2‐THOR: An interactive 3D environment for visual AI. arXiv preprint arXiv:1712.05474.

71 Xia, F., Zamir, A.R., He, Z. et al. (2018). Gibson Env: real‐world perception for embodied agents. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9068–9079.

72 Yan, C., Misra, D., Bennett, A. et al. (2018). CHALET: Cornell house agent learning environment. arXiv preprint arXiv:1801.07357.

73 Savva, M., Chang, A.X., Dosovitskiy, A. et al. (2017). MINOS: Multimodal indoor simulator for navigation in complex environments. arXiv preprint arXiv:1712.03931.

74 Savva, M., Kadian, A., Maksymets, O. et al. (2019). Habitat: A platform for embodied AI research. Proceedings of the IEEE International Conference on Computer Vision, pp. 9339–9347.

75 Datta, S., Maksymets, O., Hoffman, J. et al. (2020). Integrating egocentric localization for more realistic point‐goal navigation agents. arXiv preprint arXiv:2009.03231.

76 Song, S., Yu, F., Zeng, A. et al. (2017). Semantic scene completion from a single depth image. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1746–1754.

77 Chang, A., Dai, A., Funkhouser, T. et al. (2017). Matterport3D: learning from RGB‐D data in indoor environments. arXiv preprint arXiv:1709.06158.

78 Jaderberg, M., Mnih, V., Czarnecki, W.M. et al. (2016). Reinforcement learning with unsupervised auxiliary tasks. arXiv preprint arXiv:1611.05397.

79 Dosovitskiy, A. and Koltun, V. (2016). Learning to act by predicting the future. arXiv preprint arXiv:1611.01779.

80 Singh, S.P. and Sutton, R.S. (1996). Reinforcement learning with replacing eligibility traces. Machine Learning 22 (1–3): 123–158.

81 Schulman, J., Wolski, F., Dhariwal, P. et al. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347.

82 Mur‐Artal, R. and Tardos, J.D. (2017). ORB‐SLAM2: An open‐source SLAM system for monocular, stereo, and RGB‐D cameras. IEEE Transactions on Robotics 33 (5): 1255–1262.

83 Mishkin, D., Dosovitskiy, A., and Koltun, V. (2019). Benchmarking classic and learned navigation in complex 3D environments. arXiv preprint arXiv:1901.10915.

84 Anderson, P., Chang, A., Chaplot, D.S. et al. (2018). On evaluation of embodied navigation agents. arXiv preprint arXiv:1807.06757.

85 Jackson, F. (1982). Epiphenomenal qualia. The Philosophical Quarterly 32 (127): 127–136.

86 Tye, M. (1997). Qualia. In: The Stanford Encyclopedia of Philosophy. https://seop.illc.uva.nl/entries/qualia/.

87 Hosoda, K. (2004). Robot finger design for developmental tactile interaction. In: Embodied Artificial Intelligence (ed. F. Iida, R. Pfeifer, L. Steels et al.), 219–230. Springer.

88 Floreano, D., Mondada, F., Perez‐Uribe, A., and Roggen, D. (2004). Evolution of embodied intelligence. In: Embodied Artificial Intelligence (ed. F. Iida, R. Pfeifer, L. Steels et al.), 293–311. Springer.

89 Floreano, D., Husbands, P., and Nolfi, S. (2008). Evolutionary Robotics. Technical report. Springer‐Verlag. https://infoscience.epfl.ch/record/111527.

90 Pfeifer, R. and Iida, F. (2004). Embodied artificial intelligence: trends and challenges. In: Embodied Artificial Intelligence (ed. F. Iida, R. Pfeifer, L. Steels et al.), 1–26. Springer.
